Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below. The reason is the dataset generation error reported here.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 712, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 757, in _build_writer
                  self.pa_writer = pq.ParquetWriter(
                                   ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/pyarrow/parquet/core.py", line 1070, in __init__
                  self.writer = _parquet.ParquetWriter(
                                ^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/_parquet.pyx", line 2363, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1847, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                                            ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 731, in finalize
                  self._build_writer(self.schema)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 757, in _build_writer
                  self.pa_writer = pq.ParquetWriter(
                                   ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/pyarrow/parquet/core.py", line 1070, in __init__
                  self.writer = _parquet.ParquetWriter(
                                ^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/_parquet.pyx", line 2363, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1455, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1054, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

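The failure is narrow: both preview rows carry an empty object for task_hashes (shown as {} below), so Arrow infers a struct type with no child fields for that column, and the Parquet writer cannot serialize it. Below is a minimal sketch, assuming only pyarrow (file names are illustrative, and this is not what the Hub worker itself does), that reproduces the error and applies one possible workaround, replacing the empty-struct column with its JSON string form; the dummy child field suggested by the error message would work as well.

import pyarrow as pa
import pyarrow.parquet as pq

# Minimal reproduction: a column whose values are all {} is inferred as a
# struct type with zero child fields, which the Parquet writer rejects.
table = pa.table({"task_hashes": pa.array([{}, {}], type=pa.struct([]))})
try:
    pq.write_table(table, "repro.parquet")
except pa.lib.ArrowNotImplementedError as err:
    print(err)  # Cannot write struct type 'task_hashes' with no child field ...

# One workaround (an assumption, not the official fix): serialize the
# empty-struct column to a JSON string before writing, so the schema no
# longer contains a child-less struct.
fixed = table.set_column(
    table.schema.get_field_index("task_hashes"),
    "task_hashes",
    pa.array(["{}"] * table.num_rows, type=pa.string()),
)
pq.write_table(fixed, "fixed.parquet")  # succeeds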

Columns and types:
  results: dict
  groups: dict
  group_subtasks: dict
  configs: dict
  versions: dict
  n-shot: dict
  higher_is_better: dict
  n-samples: dict
  config: dict
  git_hash: string
  date: float64
  pretty_env_info: string
  transformers_version: string
  lm_eval_version: string
  upper_git_hash: null
  tokenizer_pad_token: list
  tokenizer_eos_token: list
  tokenizer_bos_token: list
  eot_token_id: int64
  max_length: int64
  task_hashes: dict
  model_source: string
  model_name: string
  model_name_sanitized: string
  system_instruction: null
  system_instruction_sha: null
  fewshot_as_multiturn: bool
  chat_template: string
  chat_template_sha: string
  start_time: float64
  end_time: float64
  total_evaluation_time_seconds: string
{"ifeval":{"alias":"ifeval","prompt_level_strict_acc,none":0.4436229205175601,"prompt_level_strict_a(...TRUNCATED)
{"mmlu":{"acc,none":0.519860424542018,"acc_stderr,none":0.0036571951950868506,"alias":"mmlu"},"mmlu_(...TRUNCATED)
{"ifeval":[],"mmlu_humanities":["mmlu_philosophy","mmlu_high_school_us_history","mmlu_logical_fallac(...TRUNCATED)
{"ifeval":{"task":"ifeval","dataset_path":"google/IFEval","test_split":"train","doc_to_text":"prompt(...TRUNCATED)
{"ifeval":4.0,"mmlu":2,"mmlu_abstract_algebra":1.0,"mmlu_anatomy":1.0,"mmlu_astronomy":1.0,"mmlu_bus(...TRUNCATED)
{"ifeval":0,"mmlu_abstract_algebra":0,"mmlu_anatomy":0,"mmlu_astronomy":0,"mmlu_business_ethics":0,"(...TRUNCATED)
{"ifeval":{"prompt_level_strict_acc":true,"inst_level_strict_acc":true,"prompt_level_loose_acc":true(...TRUNCATED)
{"mmlu_college_mathematics":{"original":100,"effective":100},"mmlu_college_computer_science":{"origi(...TRUNCATED)
{"model":"vllm","model_args":"pretrained=/home/dgxuser/workspace/Mango/Lora-merge/output-ckpt-100,to(...TRUNCATED)
0563daa3
1,762,826,535.492472
"PyTorch version: 2.8.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to(...TRUNCATED)
4.57.1
0.4.9.1
null
[ "<pad>", "3" ]
[ "<|assistant_end|>", "68" ]
[ "<s>", "1" ]
68
8,192
{}
vllm
/home/dgxuser/workspace/Mango/Lora-merge/output-ckpt-100
__home__dgxuser__workspace__Mango__Lora-merge__output-ckpt-100
null
null
false
"{%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}\n {%- if pa(...TRUNCATED)
00067a2e29980bd17baa6a745a54556295f9fdda712d41f9c9a886376988e363
1,842,003.557097
1,842,431.021378
427.4642807790078
{"ifeval":{"alias":"ifeval","prompt_level_strict_acc,none":0.6820702402957486,"prompt_level_strict_a(...TRUNCATED)
{"mmlu":{"acc,none":0.5776679267228846,"acc_stderr,none":0.0035489115088736087,"alias":"mmlu"},"mmlu(...TRUNCATED)
{"ifeval":[],"mmlu_humanities":["mmlu_philosophy","mmlu_high_school_us_history","mmlu_logical_fallac(...TRUNCATED)
{"ifeval":{"task":"ifeval","dataset_path":"google/IFEval","test_split":"train","doc_to_text":"prompt(...TRUNCATED)
{"ifeval":4.0,"mmlu":2,"mmlu_abstract_algebra":1.0,"mmlu_anatomy":1.0,"mmlu_astronomy":1.0,"mmlu_bus(...TRUNCATED)
{"ifeval":0,"mmlu_abstract_algebra":0,"mmlu_anatomy":0,"mmlu_astronomy":0,"mmlu_business_ethics":0,"(...TRUNCATED)
{"ifeval":{"prompt_level_strict_acc":true,"inst_level_strict_acc":true,"prompt_level_loose_acc":true(...TRUNCATED)
{"mmlu_college_mathematics":{"original":100,"effective":100},"mmlu_college_computer_science":{"origi(...TRUNCATED)
{"model":"vllm","model_args":"pretrained=swiss-ai/Apertus-8B-Instruct-2509,tokenizer=swiss-ai/Apertu(...TRUNCATED)
0563daa3
1,762,827,025.12484
"PyTorch version: 2.8.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to(...TRUNCATED)
4.57.1
0.4.9.1
null
[ "<pad>", "3" ]
[ "<|assistant_end|>", "68" ]
[ "<s>", "1" ]
68
8,192
{}
vllm
swiss-ai/Apertus-8B-Instruct-2509
swiss-ai__Apertus-8B-Instruct-2509
null
null
false
"{%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}\n {%- if pa(...TRUNCATED)
00067a2e29980bd17baa6a745a54556295f9fdda712d41f9c9a886376988e363
1,842,493.192497
1,842,956.111627
462.9191297129728
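
Judging by the lm_eval_version column and the key names, each row appears to be one lm-evaluation-harness results record, so the headline numbers can be read back with nothing but the standard json module once a raw results file is downloaded. A minimal sketch, assuming a local copy of one such file (the filename results.json is hypothetical); the keys are the ones visible in the truncated preview above.

import json

# Hypothetical local path to one of the per-model results files.
with open("results.json") as f:
    run = json.load(f)

# Keys as they appear in the preview: the group-level MMLU accuracy sits
# under "groups", the IFEval prompt-level strict accuracy under "results".
mmlu_acc = run["groups"]["mmlu"]["acc,none"]
ifeval_strict = run["results"]["ifeval"]["prompt_level_strict_acc,none"]
print(f"model: {run['model_name']}")
print(f"MMLU acc: {mmlu_acc:.4f}")
print(f"IFEval prompt-level strict acc: {ifeval_strict:.4f}")

With the values shown in the preview, Row 1 would print an MMLU accuracy of about 0.520 and an IFEval prompt-level strict accuracy of about 0.444, against roughly 0.578 and 0.682 for Row 2 (swiss-ai/Apertus-8B-Instruct-2509).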


Downloads last month: 31