Jiannan Huang committed on
Commit a90bfd0 · 0 Parent(s):

FIX Height of leaderboard

.gitignore ADDED
@@ -0,0 +1,2 @@
1
+ scripts/
2
+ __pycache__/
README.md ADDED
@@ -0,0 +1,77 @@
1
+ ---
2
+ title: Physical AI Bench Leaderboard
3
+ emoji: 🤖
4
+ colorFrom: blue
5
+ colorTo: purple
6
+ sdk: gradio
7
+ app_file: app.py
8
+ pinned: true
9
+ license: mit
10
+ short_description: Benchmark for Physical AI generation and understanding
11
+ sdk_version: 5.43.1
12
+ tags:
13
+ - leaderboard
14
+ - physical-ai
15
+ - world-models
16
+ - autonomous-driving
17
+ - robotics
18
+ - embodied-ai
19
+ ---
20
+
21
+ # Physical AI Bench Leaderboard
22
+
23
+ **Physical AI Bench (PAI-Bench)** is a comprehensive benchmark suite for evaluating physical AI generation and understanding across diverse scenarios including autonomous vehicles, robotics, industrial spaces, and everyday ego-centric environments.
24
+
25
+ ## Resources
26
+
27
+ - 🌐 [GitHub Repository](https://github.com/SHI-Labs/physical-ai-bench)
28
+ - 📊 [Predict Dataset](https://huggingface.co/datasets/shi-labs/physical-ai-bench-predict)
29
+ - 📊 [Transfer Dataset](https://huggingface.co/datasets/shi-labs/physical-ai-bench-transfer)
30
+ - 📊 [Reason Dataset](https://huggingface.co/datasets/shi-labs/physical-ai-bench-reason)
31
+
32
+ ## Citation
33
+
34
+ ```bibtex
35
+ @misc{PAIBench2025,
36
+ title={Physical AI Bench: A Comprehensive Benchmark for Physical AI Generation and Understanding},
37
+ author={Fengzhe Zhou and Jiannan Huang and Jialuo Li and Humphrey Shi},
38
+ year={2025},
39
+ url={https://github.com/SHI-Labs/physical-ai-bench}
40
+ }
41
+ ```
42
+
43
+ ---
44
+
45
+ # Configuration
46
+
47
+ Most of the variables to change for a default leaderboard are in `src/env.py` (replace the paths with those of your leaderboard) and `src/about.py` (for tasks).
48
+
49
+ Results files should have the following format and be stored as JSON files:
50
+ ```json
51
+ {
52
+ "config": {
53
+ "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
54
+ "model_name": "path of the model on the hub: org/model",
55
+ "model_sha": "revision on the hub",
56
+ },
57
+ "results": {
58
+ "task_name": {
59
+ "metric_name": score,
60
+ },
61
+ "task_name2": {
62
+ "metric_name": score,
63
+ }
64
+ }
65
+ }
66
+ ```
67
+
68
+ Request files are created automatically by this tool.
69
+
70
+ If you encounter a problem on the space, don't hesitate to restart it to remove the created eval-queue, eval-queue-bk, eval-results and eval-results-bk folders.
71
+
72
+ # Code logic for more complex edits
73
+
74
+ You'll find:
75
+ - the main table's column names and properties in `src/display/utils.py`
76
+ - the logic to read all results and request files and convert them into dataframe rows, in `src/leaderboard/read_evals.py` and `src/populate.py`
77
+ - the logic to allow or filter submissions in `src/submission/submit.py` and `src/submission/check_validity.py`
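
To make the results-file format above concrete, here is a minimal sketch of how one such JSON file could be flattened into a single leaderboard row, in the spirit of what `src/leaderboard/read_evals.py` and `src/populate.py` do in the default template. The file path and the `flatten_result` helper are hypothetical and shown only for illustration.

```python
import json
import pandas as pd

def flatten_result(path: str) -> dict:
    """Flatten one results JSON (format shown above) into a single flat row.

    Hypothetical helper: the real logic lives in src/leaderboard/read_evals.py.
    """
    with open(path) as f:
        data = json.load(f)

    row = {
        "model": data["config"]["model_name"],
        "revision": data["config"].get("model_sha", "main"),
        "precision": data["config"].get("model_dtype", "unknown"),
    }
    # One column per task/metric pair, e.g. "task_name.metric_name"
    for task, metrics in data["results"].items():
        for metric, score in metrics.items():
            row[f"{task}.{metric}"] = score
    return row

if __name__ == "__main__":
    # Hypothetical results file path; point this at one of your eval-results JSONs.
    df = pd.DataFrame([flatten_result("eval-results/org__model/results.json")])
    print(df.head())
```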
app.py ADDED
@@ -0,0 +1,628 @@
1
+ import gradio as gr
2
+ import pandas as pd
3
+
4
+
5
+ # Your leaderboard name
6
+ TITLE = """<h1 align="center" id="space-title">Physical AI Bench Leaderboard</h1>"""
7
+
8
+ # CSS to make the leaderboard full height
9
+ CSS = """
10
+ #predict_leaderboard, #reason_leaderboard {
11
+ height: auto !important;
12
+ max-height: none !important;
13
+ }
14
+ #predict_leaderboard .wrap, #reason_leaderboard .wrap {
15
+ max-height: none !important;
16
+ height: auto !important;
17
+ }
18
+ #predict_leaderboard .tbody, #reason_leaderboard .tbody {
19
+ max-height: none !important;
20
+ height: auto !important;
21
+ overflow-x: auto !important;
22
+ overflow-y: hidden !important;
23
+ }
24
+ """
25
+
26
+ # What does your leaderboard evaluate?
27
+ INTRODUCTION_TEXT = """
28
+ **Physical AI Bench (PAI-Bench)** is a comprehensive benchmark suite for evaluating physical AI generation and understanding across diverse scenarios including autonomous vehicles, robotics, industrial spaces, and everyday ego-centric environments.
29
+ """
30
+
31
+ # Which evaluations are you running? how can people reproduce what you have?
32
+ LLM_BENCHMARKS_TEXT = """
33
+ ## How it works
34
+
35
+ This leaderboard tracks model performance across three core dimensions:
36
+
37
+ - **🎨 Generation**: Evaluates world foundation models' ability to predict future states across 1,044 diverse physical scenarios
38
+ - **🔄 Conditional Generation**: Focuses on world model generation with complex control signals, featuring 600 videos across robotic arm operations, autonomous driving, and ego-centric scenes
39
+ - **🧠 Understanding**: Evaluates understanding and reasoning about physical scenes, with 1,214 embodied reasoning scenarios focused on autonomous vehicle actions
40
+
41
+ PAI-Bench covers multiple physical AI domains including autonomous driving, robotics, industrial spaces, physics simulations, human interactions, and common sense reasoning.
42
+
43
+ ### Resources
44
+ - 🌐 [GitHub Repository](https://github.com/SHI-Labs/physical-ai-bench)
45
+ - 📊 [Generation Dataset](https://huggingface.co/datasets/shi-labs/physical-ai-bench-predict)
46
+ - 📊 [Conditional Generation Dataset](https://huggingface.co/datasets/shi-labs/physical-ai-bench-transfer)
47
+ - 📊 [Understanding Dataset](https://huggingface.co/datasets/shi-labs/physical-ai-bench-reason)
48
+
49
+ ## Reproducibility
50
+
51
+ To evaluate your models on PAI-Bench, visit our [GitHub repository](https://github.com/SHI-Labs/physical-ai-bench) for evaluation scripts and detailed instructions.
52
+
53
+ ## Citation
54
+
55
+ If you use Physical AI Bench in your research, please cite:
56
+
57
+ ```bibtex
58
+ @misc{zhou2025paibenchcomprehensivebenchmarkphysical,
59
+ title={PAI-Bench: A Comprehensive Benchmark For Physical AI},
60
+ author={Fengzhe Zhou and Jiannan Huang and Jialuo Li and Deva Ramanan and Humphrey Shi},
61
+ year={2025},
62
+ eprint={2512.01989},
63
+ archivePrefix={arXiv},
64
+ primaryClass={cs.CV},
65
+ url={https://arxiv.org/abs/2512.01989},
66
+ }
67
+ ```
68
+ """
69
+
70
+
71
+ # ============================================================================
72
+ # Model Links Utility
73
+ # ============================================================================
74
+
75
+ def create_model_link(model_name):
76
+ """
77
+ Convert a model name to a markdown link to Hugging Face.
78
+
79
+ Args:
80
+ model_name: Model name in format "org/model-name" or just a plain name
81
+
82
+ Returns:
83
+ Markdown formatted link or original name if format doesn't match
84
+ """
85
+ if not isinstance(model_name, str):
86
+ return model_name
87
+
88
+ if '/' in model_name:
89
+ hf_url = f"https://huggingface.co/{model_name}"
90
+ display_name = model_name.split('/')[-1]
91
+ return f"[{display_name}]({hf_url})"
92
+
93
+ return model_name
94
+
95
+
96
+ # ============================================================================
97
+ # Generation Tab Configuration and Utilities
98
+ # ============================================================================
99
+
100
+ # Expected column order (the CSV should already have this order)
101
+ PREDICT_COLUMN_ORDER = [
102
+ 'model',
103
+ 'Overall',
104
+ 'Domain Score',
105
+ 'Quality Score',
106
+ 'Common Sense',
107
+ 'AV',
108
+ 'Robot',
109
+ 'Industry',
110
+ 'Human',
111
+ 'Physics',
112
+ 'Subject Consistency',
113
+ 'Background Consistency',
114
+ 'Motion Smoothness',
115
+ 'Aesthetic Quality',
116
+ 'Image Quality',
117
+ 'Overall Consistency',
118
+ 'I2V Subject',
119
+ 'I2V Background',
120
+ 'params',
121
+ 'activate_params'
122
+ ]
123
+
124
+ # Columns to hide by default (but still available for filtering/selection)
125
+ PREDICT_HIDDEN_COLUMNS = ['params', 'activate_params']
126
+
127
+ # Semantic/Domain dimensions (for selection button)
128
+ PREDICT_DOMAIN_SCORE_DIMENSIONS = [
129
+ 'Domain Score',
130
+ 'Common Sense',
131
+ 'AV',
132
+ 'Robot',
133
+ 'Industry',
134
+ 'Human',
135
+ 'Physics',
136
+ ]
137
+
138
+ # Quality dimensions (for selection button)
139
+ PREDICT_QUALITY_SCORE_DIMENSIONS = [
140
+ 'Quality Score',
141
+ 'Subject Consistency',
142
+ 'Background Consistency',
143
+ 'Motion Smoothness',
144
+ 'Aesthetic Quality',
145
+ 'Image Quality',
146
+ 'Overall Consistency',
147
+ 'I2V Subject',
148
+ 'I2V Background'
149
+ ]
150
+
151
+ PREDICT_DESELECTED_COLUMNS = ['Domain Score', 'Quality Score']
152
+
153
+ PREDICT_ALL_SELECTED_COLUMNS = [
154
+ 'Domain Score',
155
+ 'Quality Score',
156
+ 'Common Sense',
157
+ 'AV',
158
+ 'Robot',
159
+ 'Industry',
160
+ 'Human',
161
+ 'Physics',
162
+ 'Subject Consistency',
163
+ 'Background Consistency',
164
+ 'Motion Smoothness',
165
+ 'Aesthetic Quality',
166
+ 'Image Quality',
167
+ 'Overall Consistency',
168
+ 'I2V Subject',
169
+ 'I2V Background'
170
+ ]
171
+
172
+ # Columns that can never be deselected
173
+ PREDICT_NEVER_HIDDEN_COLUMNS = ['model', 'Overall']
174
+
175
+ # Columns displayed by default (using renamed column names)
176
+ PREDICT_DEFAULT_DISPLAYED_COLUMNS = PREDICT_NEVER_HIDDEN_COLUMNS + PREDICT_ALL_SELECTED_COLUMNS
177
+
178
+ def load_predict_json(json_path):
179
+ """
180
+ Load generation leaderboard JSON.
181
+
182
+ The JSON should already be pre-processed by generate_predict_leaderboard.py
183
+ with correct column names, ordering, sorting, and separate model/url fields.
184
+ """
185
+ df = pd.read_json(json_path, orient='records')
186
+
187
+ if 'model' in df.columns and 'url' in df.columns:
188
+ def create_link(row):
189
+ if pd.notna(row['url']):
190
+ display_name = row['model'].split('/')[-1] if '/' in row['model'] else row['model']
191
+ return f"[{display_name}]({row['url']})"
192
+ return row['model']
193
+
194
+ df['model'] = df.apply(create_link, axis=1)
195
+ df = df.drop(columns=['url'])
196
+
197
+ # Format numbers to ensure decimal places (1 decimal for numeric columns)
198
+ # Numbers should already be scaled to 0-100 by the generation script
199
+ for col in df.columns:
200
+ if col not in ['model', 'params', 'activate_params'] and pd.api.types.is_numeric_dtype(df[col]):
201
+ df[col] = df[col].apply(lambda x: f"{x:.1f}" if pd.notna(x) else x)
202
+
203
+ return df
204
+
205
+
206
+ def select_predict_domain_score():
207
+ """Return domain score for checkbox selection"""
208
+ return gr.update(value=PREDICT_DOMAIN_SCORE_DIMENSIONS)
209
+
210
+ def select_predict_quality_score():
211
+ """Return quality score for checkbox selection"""
212
+ return gr.update(value=PREDICT_QUALITY_SCORE_DIMENSIONS)
213
+
214
+ def deselect_predict_all():
215
+ """Deselect all dimensions"""
216
+ return gr.update(value=PREDICT_DESELECTED_COLUMNS)
217
+
218
+ def select_predict_all():
219
+ """Select all dimensions"""
220
+ return gr.update(value=PREDICT_ALL_SELECTED_COLUMNS)
221
+
222
+ def on_predict_dimension_selection_change(selected_columns, full_df):
223
+ """Handle dimension selection changes and update the dataframe"""
224
+ # Always include model and Overall columns
225
+ present_columns = ['model', 'Overall']
226
+
227
+ # Add selected columns
228
+ for col in selected_columns:
229
+ if col not in present_columns and col in full_df.columns:
230
+ present_columns.append(col)
231
+
232
+ # Filter dataframe to show only selected columns
233
+ updated_data = full_df[present_columns]
234
+
235
+ # Determine datatypes
236
+ datatypes = []
237
+ for col in present_columns:
238
+ if col == 'model':
239
+ datatypes.append('markdown')
240
+ elif col in ['params', 'activate_params']:
241
+ datatypes.append('number')
242
+ else:
243
+ datatypes.append('str')
244
+
245
+ return gr.update(value=updated_data, datatype=datatypes, headers=present_columns)
246
+
247
+
248
+ def init_predict_leaderboard(dataframe):
249
+ """Initialize the Generation leaderboard with given dataframe"""
250
+ if dataframe is None or dataframe.empty:
251
+ raise ValueError("Leaderboard DataFrame is empty or None.")
252
+
253
+ # Get columns that exist in the dataframe
254
+ available_default_cols = [col for col in PREDICT_DEFAULT_DISPLAYED_COLUMNS if col in dataframe.columns]
255
+
256
+ # Filter dataframe to show only default columns initially
257
+ display_df = dataframe[available_default_cols]
258
+
259
+ # Determine datatypes dynamically
260
+ datatypes = []
261
+ for col in display_df.columns:
262
+ if col == 'model':
263
+ datatypes.append('markdown')
264
+ elif col in ['params', 'activate_params']:
265
+ datatypes.append('number')
266
+ else:
267
+ datatypes.append('str') # All numeric columns are now formatted as strings
268
+
269
+ # Create the UI components
270
+ with gr.Row():
271
+ with gr.Column(scale=1):
272
+ domain_score_btn = gr.Button("Domain Score", size="md")
273
+ quality_score_btn = gr.Button("Quality Score", size="md")
274
+ select_all_btn = gr.Button("Select All", size="md")
275
+ deselect_btn = gr.Button("Deselect All", size="md")
276
+
277
+ with gr.Column(scale=4):
278
+ # Get all dimension columns (exclude model, Overall, scores, and params)
279
+ dimension_choices = [col for col in dataframe.columns
280
+ if col not in PREDICT_NEVER_HIDDEN_COLUMNS + PREDICT_HIDDEN_COLUMNS]
281
+
282
+ checkbox_group = gr.CheckboxGroup(
283
+ choices=dimension_choices,
284
+ value=[col for col in PREDICT_DEFAULT_DISPLAYED_COLUMNS if col in dimension_choices],
285
+ label="Evaluation Dimensions",
286
+ interactive=True,
287
+ )
288
+
289
+ data_component = gr.Dataframe(
290
+ value=display_df,
291
+ headers=list(display_df.columns),
292
+ datatype=datatypes,
293
+ interactive=False,
294
+ visible=True,
295
+ wrap=False,
296
+ column_widths=["320px"] + ["200px"] * (len(display_df.columns) - 1),
297
+ pinned_columns=1,
298
+ elem_id="predict_leaderboard",
299
+ max_height=10000,
300
+ )
301
+
302
+ # Setup event handlers
303
+ domain_score_btn.click(
304
+ select_predict_domain_score,
305
+ inputs=None,
306
+ outputs=[checkbox_group]
307
+ ).then(
308
+ fn=on_predict_dimension_selection_change,
309
+ inputs=[checkbox_group, gr.State(dataframe)],
310
+ outputs=data_component
311
+ )
312
+
313
+ quality_score_btn.click(
314
+ select_predict_quality_score,
315
+ inputs=None,
316
+ outputs=[checkbox_group]
317
+ ).then(
318
+ fn=on_predict_dimension_selection_change,
319
+ inputs=[checkbox_group, gr.State(dataframe)],
320
+ outputs=data_component
321
+ )
322
+
323
+ deselect_btn.click(
324
+ deselect_predict_all,
325
+ inputs=None,
326
+ outputs=[checkbox_group]
327
+ ).then(
328
+ fn=on_predict_dimension_selection_change,
329
+ inputs=[checkbox_group, gr.State(dataframe)],
330
+ outputs=data_component
331
+ )
332
+
333
+ select_all_btn.click(
334
+ select_predict_all,
335
+ inputs=None,
336
+ outputs=[checkbox_group]
337
+ ).then(
338
+ fn=on_predict_dimension_selection_change,
339
+ inputs=[checkbox_group, gr.State(dataframe)],
340
+ outputs=data_component
341
+ )
342
+
343
+ checkbox_group.change(
344
+ fn=on_predict_dimension_selection_change,
345
+ inputs=[checkbox_group, gr.State(dataframe)],
346
+ outputs=data_component
347
+ )
348
+
349
+ return data_component
350
+
351
+
352
+ # ============================================================================
353
+ # Understanding Tab Configuration and Utilities
354
+ # ============================================================================
355
+
356
+ # Column name mapping for display
357
+ REASON_COLUMN_MAPPING = {
358
+ 'Physical world': 'Physics'
359
+ }
360
+
361
+ # Desired column order
362
+ REASON_COLUMN_ORDER = [
363
+ 'model',
364
+ 'Overall',
365
+ 'Common Sense',
366
+ 'Embodied Reasoning',
367
+ 'Space',
368
+ 'Time',
369
+ 'Physics',
370
+ 'BridgeData V2',
371
+ 'RoboVQA',
372
+ 'RoboFail',
373
+ 'Agibot',
374
+ 'HoloAssist',
375
+ 'AV',
376
+ 'params',
377
+ 'activate_params'
378
+ ]
379
+
380
+ # Columns to hide by default (but still available for filtering/selection)
381
+ REASON_HIDDEN_COLUMNS = ['params', 'activate_params']
382
+
383
+ # Reasoning dimensions (for selection button)
384
+ REASON_COMMON_SENSE_DIMENSIONS = [
385
+ 'Common Sense',
386
+ 'Space',
387
+ 'Time',
388
+ 'Physics',
389
+ ]
390
+
391
+ # Domain dimensions (for selection button)
392
+ REASON_EMBODIED_REASONING_DIMENSIONS = [
393
+ 'Embodied Reasoning',
394
+ 'Space',
395
+ 'Time',
396
+ 'Physics',
397
+ 'BridgeData V2',
398
+ 'RoboVQA',
399
+ 'RoboFail',
400
+ 'Agibot',
401
+ 'HoloAssist',
402
+ 'AV',
403
+ ]
404
+
405
+ REASON_DESELECTED_COLUMNS = [
406
+ 'Common Sense',
407
+ 'Embodied Reasoning',
408
+ ]
409
+
410
+ REASON_ALL_SELECTED_COLUMNS = [
411
+ 'Common Sense',
412
+ 'Embodied Reasoning',
413
+ 'Space',
414
+ 'Time',
415
+ 'Physics',
416
+ 'BridgeData V2',
417
+ 'RoboVQA',
418
+ 'RoboFail',
419
+ 'Agibot',
420
+ 'HoloAssist',
421
+ 'AV',
422
+ ]
423
+
424
+ # Columns that can never be deselected
425
+ REASON_NEVER_HIDDEN_COLUMNS = ['model', 'Overall']
426
+
427
+ # Columns displayed by default (using renamed column names)
428
+ REASON_DEFAULT_DISPLAYED_COLUMNS = REASON_NEVER_HIDDEN_COLUMNS + REASON_ALL_SELECTED_COLUMNS
429
+
430
+
431
+ def load_reason_csv(csv_path):
432
+ """Load CSV and apply column mapping and ordering"""
433
+ df = pd.read_csv(csv_path)
434
+
435
+ # Apply column mapping
436
+ df = df.rename(columns=REASON_COLUMN_MAPPING)
437
+
438
+ # Reorder columns (only keep columns that exist in the dataframe)
439
+ available_cols = [col for col in REASON_COLUMN_ORDER if col in df.columns]
440
+ df = df[available_cols]
441
+
442
+ # Convert model names to HuggingFace links
443
+ if 'model' in df.columns:
444
+ df['model'] = df['model'].apply(create_model_link)
445
+
446
+ # Format numbers to ensure decimal places (1 decimal for all numeric columns)
447
+ for col in df.columns:
448
+ if col not in ['model', 'params', 'activate_params'] and pd.api.types.is_numeric_dtype(df[col]):
449
+ df[col] = df[col].apply(lambda x: f"{x:.1f}" if pd.notna(x) else x)
450
+
451
+ return df
452
+
453
+
454
+ def select_reason_common_sense_dimensions():
455
+ """Return reasoning dimensions for checkbox selection"""
456
+ return gr.update(value=REASON_COMMON_SENSE_DIMENSIONS)
457
+
458
+
459
+ def select_reason_embodied_reasoning_dimensions():
460
+ """Return domain dimensions for checkbox selection"""
461
+ return gr.update(value=REASON_EMBODIED_REASONING_DIMENSIONS)
462
+
463
+
464
+ def deselect_reason_all():
465
+ """Deselect all dimensions"""
466
+ return gr.update(value=REASON_DESELECTED_COLUMNS)
467
+
468
+
469
+ def select_reason_all():
470
+ """Select all dimensions"""
471
+ return gr.update(value=REASON_ALL_SELECTED_COLUMNS)
472
+
473
+
474
+ def on_reason_dimension_selection_change(selected_columns, full_df):
475
+ """Handle dimension selection changes and update the dataframe"""
476
+ # Always include model and Overall columns
477
+ present_columns = ['model', 'Overall']
478
+
479
+ # Add selected columns
480
+ for col in selected_columns:
481
+ if col not in present_columns and col in full_df.columns:
482
+ present_columns.append(col)
483
+
484
+ # Filter dataframe to show only selected columns
485
+ updated_data = full_df[present_columns]
486
+
487
+ # Determine datatypes
488
+ datatypes = []
489
+ for col in present_columns:
490
+ if col == 'model':
491
+ datatypes.append('markdown')
492
+ elif col in ['params', 'activate_params']:
493
+ datatypes.append('number')
494
+ else:
495
+ datatypes.append('str')
496
+
497
+ return gr.update(value=updated_data, datatype=datatypes, headers=present_columns)
498
+
499
+
500
+ def init_reason_leaderboard(dataframe):
501
+ """Initialize the Understanding leaderboard with given dataframe"""
502
+ if dataframe is None or dataframe.empty:
503
+ raise ValueError("Leaderboard DataFrame is empty or None.")
504
+
505
+ # Get columns that exist in the dataframe
506
+ available_default_cols = [col for col in REASON_DEFAULT_DISPLAYED_COLUMNS if col in dataframe.columns]
507
+
508
+ # Filter dataframe to show only default columns initially
509
+ display_df = dataframe[available_default_cols]
510
+
511
+ # Determine datatypes dynamically
512
+ datatypes = []
513
+ for col in display_df.columns:
514
+ if col == 'model':
515
+ datatypes.append('markdown')
516
+ elif col in ['params', 'activate_params']:
517
+ datatypes.append('number')
518
+ else:
519
+ datatypes.append('str') # All numeric columns are now formatted as strings
520
+
521
+ # Create the UI components
522
+ with gr.Row():
523
+ with gr.Column(scale=1):
524
+ common_sense_btn = gr.Button("Common Sense", size="md")
525
+ embodied_reasoning_btn = gr.Button("Embodied Reasoning", size="md")
526
+ select_all_btn = gr.Button("Select All", size="md")
527
+ deselect_btn = gr.Button("Deselect All", size="md")
528
+
529
+ with gr.Column(scale=4):
530
+ # Get all dimension columns (exclude model, Overall, and params)
531
+ dimension_choices = [col for col in dataframe.columns
532
+ if col not in REASON_NEVER_HIDDEN_COLUMNS + REASON_HIDDEN_COLUMNS]
533
+
534
+ checkbox_group = gr.CheckboxGroup(
535
+ choices=dimension_choices,
536
+ value=[col for col in REASON_DEFAULT_DISPLAYED_COLUMNS if col in dimension_choices],
537
+ label="Evaluation Dimensions",
538
+ interactive=True,
539
+ )
540
+
541
+ data_component = gr.Dataframe(
542
+ value=display_df,
543
+ headers=list(display_df.columns),
544
+ datatype=datatypes,
545
+ interactive=False,
546
+ visible=True,
547
+ wrap=False, # Allow horizontal scrolling, don't wrap content
548
+ column_widths=["320px"] + ["200px"] * (len(display_df.columns) - 1),
549
+ pinned_columns=1,
550
+ elem_id="reason_leaderboard",
551
+ max_height=10000,
552
+ )
553
+
554
+ # Setup event handlers
555
+ common_sense_btn.click(
556
+ select_reason_common_sense_dimensions,
557
+ inputs=None,
558
+ outputs=[checkbox_group]
559
+ ).then(
560
+ fn=on_reason_dimension_selection_change,
561
+ inputs=[checkbox_group, gr.State(dataframe)],
562
+ outputs=data_component
563
+ )
564
+
565
+ embodied_reasoning_btn.click(
566
+ select_reason_embodied_reasoning_dimensions,
567
+ inputs=None,
568
+ outputs=[checkbox_group]
569
+ ).then(
570
+ fn=on_reason_dimension_selection_change,
571
+ inputs=[checkbox_group, gr.State(dataframe)],
572
+ outputs=data_component
573
+ )
574
+
575
+ deselect_btn.click(
576
+ deselect_reason_all,
577
+ inputs=None,
578
+ outputs=[checkbox_group]
579
+ ).then(
580
+ fn=on_reason_dimension_selection_change,
581
+ inputs=[checkbox_group, gr.State(dataframe)],
582
+ outputs=data_component
583
+ )
584
+
585
+ select_all_btn.click(
586
+ select_reason_all,
587
+ inputs=None,
588
+ outputs=[checkbox_group]
589
+ ).then(
590
+ fn=on_reason_dimension_selection_change,
591
+ inputs=[checkbox_group, gr.State(dataframe)],
592
+ outputs=data_component
593
+ )
594
+
595
+ checkbox_group.change(
596
+ fn=on_reason_dimension_selection_change,
597
+ inputs=[checkbox_group, gr.State(dataframe)],
598
+ outputs=data_component
599
+ )
600
+
601
+ return data_component
602
+
603
+
604
+ # ============================================================================
605
+ # Main Application
606
+ # ============================================================================
607
+
608
+ demo = gr.Blocks()
609
+ with demo:
610
+ gr.HTML(TITLE)
611
+ gr.Markdown(INTRODUCTION_TEXT, elem_classes="markdown-text")
612
+
613
+ with gr.Tabs(elem_classes="tab-buttons") as tabs:
614
+ with gr.TabItem("🎨 Generation", elem_id="predict-tab", id=0):
615
+ predict_df = load_predict_json("data/predict-leaderboard.json")
616
+ predict_leaderboard = init_predict_leaderboard(predict_df)
617
+
618
+ with gr.TabItem("🔄 Conditional Generation", elem_id="transfer-tab", id=1):
619
+ gr.Markdown("## Coming Soon", elem_classes="markdown-text")
620
+
621
+ with gr.TabItem("🧠 Understanding", elem_id="reason-tab", id=2):
622
+ reason_df = load_reason_csv("data/reason-leaderboard.csv")
623
+ reason_leaderboard = init_reason_leaderboard(reason_df)
624
+
625
+ with gr.TabItem("ℹ️ About", elem_id="about-tab", id=3):
626
+ gr.Markdown(LLM_BENCHMARKS_TEXT, elem_classes="markdown-text")
627
+
628
+ demo.launch(css=CSS)
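
The height fix in this commit combines three pieces visible in `app.py`: an `elem_id` on each `gr.Dataframe`, a large `max_height`, and CSS that lifts the remaining height caps on those elements. Below is a minimal, self-contained sketch of the same pattern; the `demo_table` id and the toy dataframe are invented for illustration, and the CSS is attached via `gr.Blocks(css=...)` here.

```python
import gradio as gr
import pandas as pd

# CSS scoped to the table's elem_id, mirroring the approach in app.py:
# let the component grow with its content instead of scrolling internally.
CSS = """
#demo_table, #demo_table .wrap, #demo_table .tbody {
    height: auto !important;
    max-height: none !important;
}
"""

toy_df = pd.DataFrame({"model": [f"model-{i}" for i in range(40)], "Overall": range(40)})

with gr.Blocks(css=CSS) as height_demo:
    gr.Dataframe(
        value=toy_df,
        elem_id="demo_table",  # target of the CSS override above
        max_height=10000,      # generous cap; the CSS lifts the rest
        pinned_columns=1,
        interactive=False,
    )

if __name__ == "__main__":
    height_demo.launch()
```

Scoping the CSS to the `elem_id` keeps the override from affecting any other dataframe on the page.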
data/predict-leaderboard.json ADDED
@@ -0,0 +1,301 @@
1
+ [
2
+ {
3
+ "model":"Veo-3",
4
+ "url":"https:\/\/deepmind.google\/models\/veo",
5
+ "Overall":82.1,
6
+ "Domain Score":86.7,
7
+ "Quality Score":77.6,
8
+ "Common Sense":94.4,
9
+ "AV":68.7,
10
+ "Robot":86.9,
11
+ "Industry":89.7,
12
+ "Human":84.4,
13
+ "Physics":91.6,
14
+ "Subject Consistency":91.4,
15
+ "Background Consistency":93.1,
16
+ "Motion Smoothness":99.2,
17
+ "Aesthetic Quality":51.9,
18
+ "Image Quality":69.8,
19
+ "Overall Consistency":21.7,
20
+ "I2V Subject":97.0,
21
+ "I2V Background":96.9,
22
+ "params":null,
23
+ "activate_params":null
24
+ },
25
+ {
26
+ "model":"nvidia\/Cosmos-Predict2.5-2B",
27
+ "url":"https:\/\/huggingface.co\/nvidia\/Cosmos-Predict2.5-2B",
28
+ "Overall":81.0,
29
+ "Domain Score":84.0,
30
+ "Quality Score":77.9,
31
+ "Common Sense":94.1,
32
+ "AV":66.1,
33
+ "Robot":80.8,
34
+ "Industry":87.8,
35
+ "Human":81.4,
36
+ "Physics":93.9,
37
+ "Subject Consistency":92.5,
38
+ "Background Consistency":94.2,
39
+ "Motion Smoothness":99.1,
40
+ "Aesthetic Quality":52.4,
41
+ "Image Quality":70.8,
42
+ "Overall Consistency":20.1,
43
+ "I2V Subject":96.6,
44
+ "I2V Background":97.4,
45
+ "params":2.0,
46
+ "activate_params":2.0
47
+ },
48
+ {
49
+ "model":"Wan-AI\/Wan2.2-I2V-A14B",
50
+ "url":"https:\/\/huggingface.co\/Wan-AI\/Wan2.2-I2V-A14B",
51
+ "Overall":80.6,
52
+ "Domain Score":84.1,
53
+ "Quality Score":77.2,
54
+ "Common Sense":93.2,
55
+ "AV":66.3,
56
+ "Robot":81.7,
57
+ "Industry":89.2,
58
+ "Human":82.1,
59
+ "Physics":91.8,
60
+ "Subject Consistency":91.6,
61
+ "Background Consistency":93.7,
62
+ "Motion Smoothness":98.3,
63
+ "Aesthetic Quality":51.2,
64
+ "Image Quality":69.6,
65
+ "Overall Consistency":20.4,
66
+ "I2V Subject":96.0,
67
+ "I2V Background":96.6,
68
+ "params":14.0,
69
+ "activate_params":14.0
70
+ },
71
+ {
72
+ "model":"Wan-AI\/Wan2.2-TI2V-5B",
73
+ "url":"https:\/\/huggingface.co\/Wan-AI\/Wan2.2-TI2V-5B",
74
+ "Overall":80.4,
75
+ "Domain Score":83.4,
76
+ "Quality Score":77.4,
77
+ "Common Sense":93.1,
78
+ "AV":65.2,
79
+ "Robot":79.3,
80
+ "Industry":88.4,
81
+ "Human":83.0,
82
+ "Physics":91.5,
83
+ "Subject Consistency":91.8,
84
+ "Background Consistency":93.7,
85
+ "Motion Smoothness":98.8,
86
+ "Aesthetic Quality":51.9,
87
+ "Image Quality":69.9,
88
+ "Overall Consistency":20.3,
89
+ "I2V Subject":95.9,
90
+ "I2V Background":96.7,
91
+ "params":5.0,
92
+ "activate_params":5.0
93
+ },
94
+ {
95
+ "model":"Wan-AI\/Wan2.1-I2V-14B-720P",
96
+ "url":"https:\/\/huggingface.co\/Wan-AI\/Wan2.1-I2V-14B-720P",
97
+ "Overall":79.7,
98
+ "Domain Score":82.7,
99
+ "Quality Score":76.8,
100
+ "Common Sense":90.6,
101
+ "AV":66.9,
102
+ "Robot":80.1,
103
+ "Industry":89.7,
104
+ "Human":80.1,
105
+ "Physics":88.7,
106
+ "Subject Consistency":90.0,
107
+ "Background Consistency":93.1,
108
+ "Motion Smoothness":98.1,
109
+ "Aesthetic Quality":51.5,
110
+ "Image Quality":70.1,
111
+ "Overall Consistency":20.4,
112
+ "I2V Subject":95.2,
113
+ "I2V Background":96.0,
114
+ "params":14.0,
115
+ "activate_params":14.0
116
+ },
117
+ {
118
+ "model":"MAGI\/MAGI-1-24B",
119
+ "url":"https:\/\/huggingface.co\/sand-ai\/MAGI-1",
120
+ "Overall":78.5,
121
+ "Domain Score":80.5,
122
+ "Quality Score":76.5,
123
+ "Common Sense":90.6,
124
+ "AV":61.8,
125
+ "Robot":73.5,
126
+ "Industry":84.1,
127
+ "Human":79.8,
128
+ "Physics":87.7,
129
+ "Subject Consistency":90.0,
130
+ "Background Consistency":92.4,
131
+ "Motion Smoothness":99.0,
132
+ "Aesthetic Quality":50.2,
133
+ "Image Quality":64.2,
134
+ "Overall Consistency":21.4,
135
+ "I2V Subject":96.8,
136
+ "I2V Background":97.9,
137
+ "params":24.0,
138
+ "activate_params":24.0
139
+ },
140
+ {
141
+ "model":"THUDM\/CogVideoX1.5-5B-I2V",
142
+ "url":"https:\/\/huggingface.co\/THUDM\/CogVideoX1.5-5B-I2V",
143
+ "Overall":78.3,
144
+ "Domain Score":80.1,
145
+ "Quality Score":76.6,
146
+ "Common Sense":89.1,
147
+ "AV":59.7,
148
+ "Robot":73.0,
149
+ "Industry":84.4,
150
+ "Human":79.2,
151
+ "Physics":91.8,
152
+ "Subject Consistency":91.6,
153
+ "Background Consistency":93.9,
154
+ "Motion Smoothness":98.5,
155
+ "Aesthetic Quality":50.0,
156
+ "Image Quality":66.5,
157
+ "Overall Consistency":21.2,
158
+ "I2V Subject":95.0,
159
+ "I2V Background":96.1,
160
+ "params":5.0,
161
+ "activate_params":5.0
162
+ },
163
+ {
164
+ "model":"THUDM\/CogVideoX-5B-I2V",
165
+ "url":"https:\/\/huggingface.co\/THUDM\/CogVideoX-5B-I2V",
166
+ "Overall":77.9,
167
+ "Domain Score":79.5,
168
+ "Quality Score":76.3,
169
+ "Common Sense":87.7,
170
+ "AV":58.0,
171
+ "Robot":74.0,
172
+ "Industry":84.4,
173
+ "Human":79.0,
174
+ "Physics":90.2,
175
+ "Subject Consistency":91.4,
176
+ "Background Consistency":93.4,
177
+ "Motion Smoothness":98.0,
178
+ "Aesthetic Quality":51.2,
179
+ "Image Quality":64.6,
180
+ "Overall Consistency":21.3,
181
+ "I2V Subject":94.1,
182
+ "I2V Background":95.9,
183
+ "params":5.0,
184
+ "activate_params":5.0
185
+ },
186
+ {
187
+ "model":"Lightricks\/LTX-Video-13B",
188
+ "url":"https:\/\/huggingface.co\/Lightricks\/LTX-Video",
189
+ "Overall":77.9,
190
+ "Domain Score":78.4,
191
+ "Quality Score":77.4,
192
+ "Common Sense":88.9,
193
+ "AV":55.3,
194
+ "Robot":70.1,
195
+ "Industry":82.7,
196
+ "Human":78.3,
197
+ "Physics":90.1,
198
+ "Subject Consistency":90.6,
199
+ "Background Consistency":93.5,
200
+ "Motion Smoothness":99.0,
201
+ "Aesthetic Quality":53.5,
202
+ "Image Quality":69.5,
203
+ "Overall Consistency":21.4,
204
+ "I2V Subject":95.7,
205
+ "I2V Background":96.0,
206
+ "params":13.0,
207
+ "activate_params":13.0
208
+ },
209
+ {
210
+ "model":"Tencent\/HunyuanVideo-I2V",
211
+ "url":"https:\/\/huggingface.co\/Tencent\/HunyuanVideo-I2V",
212
+ "Overall":77.4,
213
+ "Domain Score":76.8,
214
+ "Quality Score":78.0,
215
+ "Common Sense":87.4,
216
+ "AV":56.3,
217
+ "Robot":67.7,
218
+ "Industry":83.0,
219
+ "Human":75.5,
220
+ "Physics":88.2,
221
+ "Subject Consistency":94.3,
222
+ "Background Consistency":95.3,
223
+ "Motion Smoothness":99.5,
224
+ "Aesthetic Quality":52.1,
225
+ "Image Quality":65.2,
226
+ "Overall Consistency":21.5,
227
+ "I2V Subject":98.6,
228
+ "I2V Background":97.6,
229
+ "params":null,
230
+ "activate_params":null
231
+ },
232
+ {
233
+ "model":"MAGI\/MAGI-1-4.5B",
234
+ "url":"https:\/\/huggingface.co\/sand-ai\/MAGI-1",
235
+ "Overall":76.9,
236
+ "Domain Score":77.4,
237
+ "Quality Score":76.3,
238
+ "Common Sense":87.5,
239
+ "AV":56.3,
240
+ "Robot":71.6,
241
+ "Industry":79.8,
242
+ "Human":76.0,
243
+ "Physics":88.9,
244
+ "Subject Consistency":92.1,
245
+ "Background Consistency":93.3,
246
+ "Motion Smoothness":99.0,
247
+ "Aesthetic Quality":50.4,
248
+ "Image Quality":61.8,
249
+ "Overall Consistency":21.6,
250
+ "I2V Subject":94.5,
251
+ "I2V Background":98.1,
252
+ "params":4.5,
253
+ "activate_params":4.5
254
+ },
255
+ {
256
+ "model":"Lightricks\/LTX-Video-2B",
257
+ "url":"https:\/\/huggingface.co\/Lightricks\/LTX-Video",
258
+ "Overall":76.9,
259
+ "Domain Score":76.6,
260
+ "Quality Score":77.1,
261
+ "Common Sense":87.3,
262
+ "AV":53.6,
263
+ "Robot":67.1,
264
+ "Industry":81.5,
265
+ "Human":77.1,
266
+ "Physics":87.6,
267
+ "Subject Consistency":89.2,
268
+ "Background Consistency":92.7,
269
+ "Motion Smoothness":98.7,
270
+ "Aesthetic Quality":53.2,
271
+ "Image Quality":71.3,
272
+ "Overall Consistency":21.1,
273
+ "I2V Subject":95.0,
274
+ "I2V Background":95.9,
275
+ "params":2.0,
276
+ "activate_params":2.0
277
+ },
278
+ {
279
+ "model":"Doubiiu\/DynamiCrafter_1024",
280
+ "url":"https:\/\/huggingface.co\/Doubiiu\/DynamiCrafter_1024",
281
+ "Overall":69.7,
282
+ "Domain Score":65.6,
283
+ "Quality Score":73.7,
284
+ "Common Sense":75.2,
285
+ "AV":43.4,
286
+ "Robot":55.0,
287
+ "Industry":72.5,
288
+ "Human":64.1,
289
+ "Physics":83.8,
290
+ "Subject Consistency":91.1,
291
+ "Background Consistency":92.5,
292
+ "Motion Smoothness":94.9,
293
+ "Aesthetic Quality":51.5,
294
+ "Image Quality":68.0,
295
+ "Overall Consistency":21.2,
296
+ "I2V Subject":84.5,
297
+ "I2V Background":86.2,
298
+ "params":null,
299
+ "activate_params":null
300
+ }
301
+ ]
data/reason-leaderboard.csv ADDED
@@ -0,0 +1,15 @@
1
+ model,Overall,Common Sense,Embodied Reasoning,Space,Time,Physics,BridgeData V2,RoboVQA,RoboFail,Agibot,HoloAssist,AV,params,activate_params
2
+ GPT-5,70.0,72.7,67.4,67.5,72.8,74.3,53.0,90.9,68.0,55.0,73.0,62.0,,
3
+ Qwen/Qwen3-VL-235B-A22B-Instruct,64.8,65.2,64.4,56.2,69.8,62.4,42.0,93.6,71.0,45.0,76.0,56.0,235.0,22.0
4
+ Qwen/Qwen3-VL-30B-A3B-Instruct,60.6,59.9,61.3,52.5,62.1,59.7,36.0,89.1,67.0,43.0,81.0,49.0,30.0,3.0
5
+ Qwen/Qwen2.5-VL-72B-Instruct,56.8,57.9,55.7,56.2,62.8,52.2,35.0,90.9,73.0,35.0,58.0,39.0,72.0,72.0
6
+ OpenGVLab/InternVL3_5-38B,55.8,55.8,55.7,57.5,60.4,49.1,36.0,81.8,67.0,44.0,71.0,32.0,38.0,38.0
7
+ nvidia/Cosmos-Reason1-7B,54.3,50.7,57.9,57.5,53.7,44.2,41.0,91.8,65.0,42.0,57.0,47.0,7.0,7.0
8
+ GPT-4o,53.7,56.3,51.1,55.0,55.0,58.4,40.0,56.4,65.0,37.0,65.0,43.0,,
9
+ Qwen/Qwen2.5-VL-32B-Instruct,51.9,53.8,50.0,50.0,61.1,45.6,32.0,90.0,52.0,34.0,55.0,33.0,32.0,32.0
10
+ OpenGVLab/InternVL3_5-8B,50.5,50.5,50.5,48.8,54.7,45.6,32.0,77.3,66.0,38.0,49.0,38.0,8.0,8.0
11
+ Qwen/Qwen2.5-VL-7B-Instruct,50.3,47.7,53.0,47.5,55.4,37.6,33.0,83.6,62.0,44.0,47.0,45.0,7.0,7.0
12
+ OpenGVLab/InternVL3_5-14B,49.7,50.3,49.0,52.5,52.0,47.3,26.0,80.0,67.0,28.0,54.0,36.0,14.0,14.0
13
+ OpenGVLab/InternVL3_5-30B-A3B,49.5,49.5,49.5,47.5,54.4,43.8,37.0,78.2,60.0,27.0,55.0,37.0,30.0,3.0
14
+ Qwen/Qwen2.5-VL-3B-Instruct,48.1,47.4,48.9,47.5,50.7,42.9,31.0,82.7,63.0,36.0,48.0,29.0,3.0,3.0
15
+ zai-org/GLM-4.5V,45.5,46.0,44.9,46.2,50.7,39.8,26.0,83.6,69.0,25.0,24.0,38.0,,
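
Both bundled data files can be sanity-checked outside the Space with plain pandas. A minimal sketch, assuming it is run from the repository root:

```python
import pandas as pd

# Load the two leaderboard data files shipped with this commit.
predict_df = pd.read_json("data/predict-leaderboard.json", orient="records")
reason_df = pd.read_csv("data/reason-leaderboard.csv")

for name, df in [("predict", predict_df), ("reason", reason_df)]:
    # Every leaderboard is expected to expose at least these two columns.
    missing = {"model", "Overall"} - set(df.columns)
    assert not missing, f"{name} leaderboard is missing columns: {missing}"
    top = df.sort_values("Overall", ascending=False).iloc[0]
    print(f"{name}: {len(df)} rows, top model by Overall = {top['model']}")
```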
inspect_gradio.py ADDED
@@ -0,0 +1,5 @@
1
+ import gradio as gr
2
+ import inspect
3
+
4
+ with open("signature.txt", "w") as f:
5
+ f.write(str(inspect.signature(gr.Dataframe.__init__)))
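
For reference, running `python inspect_gradio.py` regenerates `signature.txt` (added below), which records the `gr.Dataframe` constructor signature, including the `max_height`, `pinned_columns`, and `column_widths` arguments that the leaderboard tables rely on.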
requirements.txt ADDED
@@ -0,0 +1,2 @@
1
+ gradio
2
+ pandas
signature.txt ADDED
@@ -0,0 +1 @@
1
+ (self, value: 'pd.DataFrame | Styler | np.ndarray | pl.DataFrame | list | list[list] | dict | str | Callable | None' = None, *, headers: 'list[str] | None' = None, row_count: 'int | None' = None, row_limits: 'tuple[int | None, int | None] | None' = None, col_count: 'None' = None, column_count: 'int | None' = None, column_limits: 'tuple[int | None, int | None] | None' = None, datatype: "Literal['str', 'number', 'bool', 'date', 'markdown', 'html', 'image', 'auto'] | Sequence[Literal['str', 'number', 'bool', 'date', 'markdown', 'html']]" = 'str', type: "Literal['pandas', 'numpy', 'array', 'polars']" = 'pandas', latex_delimiters: 'list[dict[str, str | bool]] | None' = None, label: 'str | I18nData | None' = None, show_label: 'bool | None' = None, every: 'Timer | float | None' = None, inputs: 'Component | Sequence[Component] | set[Component] | None' = None, max_height: 'int | str' = 500, scale: 'int | None' = None, min_width: 'int' = 160, interactive: 'bool | None' = None, visible: "bool | Literal['hidden']" = True, elem_id: 'str | None' = None, elem_classes: 'list[str] | str | None' = None, render: 'bool' = True, key: 'int | str | tuple[int | str, ...] | None' = None, preserved_by_key: 'list[str] | str | None' = 'value', wrap: 'bool' = False, line_breaks: 'bool' = True, column_widths: 'list[str | int] | None' = None, buttons: "list[Literal['fullscreen', 'copy']] | None" = None, show_row_numbers: 'bool' = False, max_chars: 'int | None' = None, show_search: "Literal['none', 'search', 'filter']" = 'none', pinned_columns: 'int | None' = None, static_columns: 'list[int] | None' = None)