---
license: cc-by-4.0
task_categories:
- graph-ml
tags:
- physics learning
- geometry learning
dataset_info:
  features:
  - name: Base_2_3/Zone/Elements_TRI_3/ElementConnectivity
    list: int64
  - name: Base_2_3/Zone/GridCoordinates/CoordinateX
    list: float32
  - name: Base_2_3/Zone/GridCoordinates/CoordinateY
    list: float32
  - name: Base_2_3/Zone/GridCoordinates/CoordinateZ
    list: float32
  - name: Base_2_3/Zone/VertexFields/pressure
    list: float32
  splits:
  - name: train
    num_bytes: 2294280
    num_examples: 10
  - name: test
    num_bytes: 2294280
    num_examples: 10
  download_size: 2231859
  dataset_size: 4588560
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
```yaml
legal:
  owner: NeuralOperator (https://zenodo.org/records/13993629)
  license: cc-by-4.0
data_production:
  physics: CFD
  type: simulation
  script: Converted to PLAID format for standardized access; no changes to data content.
num_samples:
  train: 10
  test: 10
storage_backend: hf_datasets
plaid:
  version: 0.1.13.dev1+gb350f274a
```
This dataset was generated with plaid; we refer to the plaid documentation for additional details on how to extract data from `plaid_sample` objects.
The simplest way to use this dataset is to first download it:
```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"

download_from_hub(repo_id, local_folder)
```
Then, to iterate over the dataset and instantiate samples:
```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"

datasetdict, converterdict = init_from_disk(local_folder)
dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```
It is possible to stream the data directly:
```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"

datasetdict, converterdict = init_streaming_from_hub(repo_id)
dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```
The features of plaid samples can be retrieved as follows:
```python
from plaid.storage import load_problem_definitions_from_disk

local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or

from plaid.storage import load_problem_definitions_from_hub

repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]

plaid_sample = ...  # use a method from above to instantiate a plaid sample

for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```
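As a minimal usage sketch, the calls above can be collected into dictionaries keyed by feature path and time value; only the methods already shown are used, and the exact type of the returned features depends on the feature itself:

```python
# Minimal sketch: gather the input and output features of one plaid sample into
# dictionaries keyed by (feature path, time value), using only the calls shown above.
inputs, outputs = {}, {}
for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        inputs[(path, t)] = plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        outputs[(path, t)] = plaid_sample.get_feature_by_path(path=path, time=t)
```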
For those familiar with HF's datasets library, raw data can be retrieved without using the plaid library:
```python
from datasets import load_dataset

repo_id = "channel/dataset"
datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```
Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.
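As an illustration of what the raw columns contain, the sketch below assembles the vertex coordinates, the TRI_3 connectivity and the pressure field of one raw sample into NumPy arrays. This is only a sketch: the column names are taken from the metadata above, `"channel/dataset"` is the same placeholder repository id used throughout this card, and the connectivity is assumed to be a flat list of vertex indices in groups of three (whether indices are 0-based or 1-based, as in CGNS, and how time-dependent features are encoded should be checked against the actual data).

```python
import numpy as np
from datasets import load_dataset

repo_id = "channel/dataset"
dataset = load_dataset(repo_id, split="train")
raw_sample = dataset[0]

# Vertex coordinates: one (num_vertices, 3) array built from the three coordinate columns.
coords = np.stack(
    [
        np.asarray(raw_sample["Base_2_3/Zone/GridCoordinates/CoordinateX"]),
        np.asarray(raw_sample["Base_2_3/Zone/GridCoordinates/CoordinateY"]),
        np.asarray(raw_sample["Base_2_3/Zone/GridCoordinates/CoordinateZ"]),
    ],
    axis=1,
)

# TRI_3 connectivity: assumed to be a flat list of vertex indices, 3 per triangle.
# CGNS connectivities are usually 1-based; adjust the offset if needed.
connectivity = np.asarray(
    raw_sample["Base_2_3/Zone/Elements_TRI_3/ElementConnectivity"]
).reshape(-1, 3)

# Vertex field: one pressure value per vertex.
pressure = np.asarray(raw_sample["Base_2_3/Zone/VertexFields/pressure"])

print(coords.shape, connectivity.shape, pressure.shape)
```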