klamike committed on
Commit a1b1abb · verified · 1 parent: 49883b5

Convert dataset to Parquet (part 00005-of-00006) (#6)


- Convert dataset to Parquet (part 00005-of-00006) (a1aceb3b288994ed79c1e7823ebfc7dabda67517)
- Delete loading script (56ec5f2725cb7ab4e2d6f2220f43e753511e0b6a)
- Delete data file (1dd9ea7a781ef6cb9486b64aeba2fba16acbe045)
- Delete data file (c761dd22b540df30e8a648226db1499da67fd05b)
- Delete data file (1a84f01fe66b883a454cd7636c9bcec6566c5835)
- Delete data file (0f01e6ff5b95375b432d8cac67916fffeb635a57)
- Delete data file (e464d94a337246b13bb47dd01d854bd3c2b69cdf)
- Delete data file (9a6a09285375588ea44a66e59cec07827d17cab6)
- Delete data file (ba147abbb3d090e3b48387420b28268814035d19)
- Delete data file (9b92048770cc2920c4c0f4f777a66c9b559a5f46)
- Delete data file (49bce2135f57bc56892e8904752fdfec3a641818)
- Delete data file (00dba4a2f7486a2568e6bd20abfbea5d119bee17)
- Delete data file (d3182e8a0d2e5b24cddaa9f746a6f72563cbcab0)
- Delete data file (275c08c62d7de0c4853ff724b3c4b24872633b5d)
- Delete data file (e4d7eceb01a192113dc738818345fb8318678f0f)
- Delete data file (89757e21106bdea25f6cb117d29d64e78b3a2f37)
- Delete data file (cbc18aa38a424335b650ccfbf975f6990095c862)
- Delete data file (5e0bbbfa110bbe1921d22f519e9f0df0aed710a3)
- Delete data file (80b1299a2489574d74b939c96bfdaf892ade3fa0)
- Delete data file (4257a262dd63109786ab69f2a619b8bf1c6f75af)
- Delete data file (28821bb209b1e6e2f2d18c06da0b5974d38c9c7e)
- Delete data file (51fe5237d101e542d8a325f11bc0ac418915f0e6)
- Delete data file (1bd3a3bf8ef881cb5fee9603e8570aa21c71c805)
- Delete data file (3add4fe3723856c263b7259bc13e9a51285c1eef)
- Delete data file (ed732844ec7ea3f150f69761060930b72bbd0898)
- Delete data file (4230c4d81ecec1760c7b7d7fc932c50306549caf)
- Delete data file (3c61183e3e3342711631403ea157dcd445c319c2)
- Delete data file (05a1df9cd07d0aff3485760b4f3693d65cd62d7c)
- Delete data file (ab43def2fd1189483066c13c286f6f908d817a4f)
- Delete data file (04ce3571b8df5855e3a365f5f285801781584e42)
- Delete data file (d570f0a1691cd8fbb974f0aea9be253a16039a41)
- Delete data file (041b5fb176364e35c857150de7f8d5ae4497dc52)
- Delete data file (865165a819753608d2b35af258a068f1c2876140)
- Delete data file (8d1608aeb8a155dcc73d9968a691cd216c04fc83)
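Every file body shown in the hunks below is a Git LFS pointer, not the data itself: the repository stores a three-line stub (`version`, `oid sha256:…`, `size …`) while the actual Parquet/HDF5 bytes live in LFS storage, addressed by the sha256 digest. As a minimal sketch (the parser below is illustrative, not part of this repository), such a pointer can be decoded like this:

```python
# Minimal sketch: parse a Git LFS pointer file of the kind shown in this diff.
# The repository tracks pointers; the real file content is fetched from LFS
# storage using the sha256 oid.
def parse_lfs_pointer(text: str) -> dict:
    """Parse the three-line 'key value' LFS pointer format into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # "oid" carries the hash algorithm as a prefix, e.g. "sha256:<hex digest>"
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),  # size of the real object, in bytes
    }

# One of the pointers from this commit (57_ieee/test-00024-of-00057.parquet):
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:d1ceff02410b84a68295ed6ed625cccd4420e256bc00c2adf62906d2b24985d1
size 85738115
"""
info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])  # sha256 85738115
```

This also explains why each "Delete data file" entry above shows only `+0 -3` or `+2 -2` line counts: only the tiny pointer stub lives in Git history.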

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
Files changed (50)
  1. infeasible/ACOPF/meta.h5.gz → 57_ieee/test-00024-of-00057.parquet +2 -2
  2. case.json.gz → 57_ieee/test-00025-of-00057.parquet +2 -2
  3. infeasible/ACOPF/primal.h5.gz → 57_ieee/test-00026-of-00057.parquet +2 -2
  4. infeasible/ACOPF/dual.h5.gz → 57_ieee/test-00027-of-00057.parquet +2 -2
  5. 57_ieee/test-00028-of-00057.parquet +3 -0
  6. 57_ieee/test-00029-of-00057.parquet +3 -0
  7. 57_ieee/test-00030-of-00057.parquet +3 -0
  8. 57_ieee/test-00031-of-00057.parquet +3 -0
  9. 57_ieee/test-00032-of-00057.parquet +3 -0
  10. 57_ieee/test-00033-of-00057.parquet +3 -0
  11. 57_ieee/test-00034-of-00057.parquet +3 -0
  12. 57_ieee/test-00035-of-00057.parquet +3 -0
  13. 57_ieee/test-00036-of-00057.parquet +3 -0
  14. 57_ieee/test-00037-of-00057.parquet +3 -0
  15. 57_ieee/test-00038-of-00057.parquet +3 -0
  16. 57_ieee/test-00039-of-00057.parquet +3 -0
  17. 57_ieee/test-00040-of-00057.parquet +3 -0
  18. 57_ieee/test-00041-of-00057.parquet +3 -0
  19. 57_ieee/test-00042-of-00057.parquet +3 -0
  20. 57_ieee/test-00043-of-00057.parquet +3 -0
  21. 57_ieee/test-00044-of-00057.parquet +3 -0
  22. 57_ieee/test-00045-of-00057.parquet +3 -0
  23. 57_ieee/test-00046-of-00057.parquet +3 -0
  24. 57_ieee/test-00047-of-00057.parquet +3 -0
  25. 57_ieee/test-00048-of-00057.parquet +3 -0
  26. 57_ieee/test-00049-of-00057.parquet +3 -0
  27. 57_ieee/test-00050-of-00057.parquet +3 -0
  28. 57_ieee/test-00051-of-00057.parquet +3 -0
  29. 57_ieee/test-00052-of-00057.parquet +3 -0
  30. 57_ieee/test-00053-of-00057.parquet +3 -0
  31. 57_ieee/test-00054-of-00057.parquet +3 -0
  32. 57_ieee/test-00055-of-00057.parquet +3 -0
  33. 57_ieee/test-00056-of-00057.parquet +3 -0
  34. PGLearn-Small-57_ieee.py +0 -397
  35. README.md +9 -1
  36. config.toml +0 -42
  37. infeasible/DCOPF/dual.h5.gz +0 -3
  38. infeasible/DCOPF/meta.h5.gz +0 -3
  39. infeasible/DCOPF/primal.h5.gz +0 -3
  40. infeasible/SOCOPF/dual.h5.gz +0 -3
  41. infeasible/SOCOPF/meta.h5.gz +0 -3
  42. infeasible/SOCOPF/primal.h5.gz +0 -3
  43. infeasible/input.h5.gz +0 -3
  44. test/ACOPF/dual.h5.gz +0 -3
  45. test/ACOPF/meta.h5.gz +0 -3
  46. test/ACOPF/primal.h5.gz +0 -3
  47. test/DCOPF/dual.h5.gz +0 -3
  48. test/DCOPF/meta.h5.gz +0 -3
  49. test/DCOPF/primal.h5.gz +0 -3
  50. test/SOCOPF/dual.h5.gz +0 -3
infeasible/ACOPF/meta.h5.gz → 57_ieee/test-00024-of-00057.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:27a94003515da14380fc3b43a7adad86208c73b7ce4b830db1625afd0638d0f2
- size 6727
+ oid sha256:d1ceff02410b84a68295ed6ed625cccd4420e256bc00c2adf62906d2b24985d1
+ size 85738115

case.json.gz → 57_ieee/test-00025-of-00057.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f80a2fd57a2640543c52ac39d62a4d8d236ab43f759f023d3770f1472aa42e7b
- size 45711
+ oid sha256:c60f76d271337ba45f83ae9f126f34feb6d45bff1ac893b48cb9ae7a5a29bac5
+ size 85737721

infeasible/ACOPF/primal.h5.gz → 57_ieee/test-00026-of-00057.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:14413fa91c281d44151d6f8db8a4adbd65f449b78f8e04f39c5c4e63f223c75c
- size 222187
+ oid sha256:2d3caa9d56a90d6ea6f6c59f4e0edfb605039f6fb4e5a9f2bfb67b45aec6fa6a
+ size 85744399

infeasible/ACOPF/dual.h5.gz → 57_ieee/test-00027-of-00057.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6269912e562525d8678838fe7e2ac250c0d2cbc94dc0507a6db0b0174c2d010b
- size 538865
+ oid sha256:f6b979cc0d3694d2a853d7d6a1410197bd690c4a91c80f7526be76a4248468d6
+ size 85757510
57_ieee/test-00028-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de80eba34121ea44bd3493209af2a52ffe7ba78c30f516175cf94be271a5f536
+ size 85756891

57_ieee/test-00029-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:637dbbccb35e66b93f8bb21d1f762ffecc8d07ec3ff581383e12f92e64585e53
+ size 85741133

57_ieee/test-00030-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b4c17499feb5850f6163c49516644069c7479a520dc5e8a9c39b55f21e242cb
+ size 85741227

57_ieee/test-00031-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fed8f6facdf744da4c41d763e6ff667b49da853d7211555256bcef1554732e65
+ size 85737798

57_ieee/test-00032-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e671ad47fa4bdd645a6f5bed3382bfd2522cfe0b7c1622f02cb3331f9f1d5381
+ size 85738561

57_ieee/test-00033-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f69cac3466835ea62d34d2be3bd5d1991f31e846e46802043a129b9aa338b000
+ size 85738693

57_ieee/test-00034-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8f39e27af167246de94ab7629ed7d2a14d24bcece65e1f258f78b74b1ff142c
+ size 85743550

57_ieee/test-00035-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f25dbd2ba35ef98f10edc5b4c3ff4fd53f6c0e040de37bcb92cf0f04e44c058
+ size 85743356

57_ieee/test-00036-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15ecbcf852eb9c9e7006a0e964f5bc70fc856703fecf3c927f315f39b9e2a3e7
+ size 85744814

57_ieee/test-00037-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d3bd6194a1ba0c11c21e474ce96366aa7b3b5a21182cab154c1827cecb30921
+ size 85745887

57_ieee/test-00038-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:351583325cc02251a605dcfafb58e91b9fec8b7af7377524910eac295081977a
+ size 85744414

57_ieee/test-00039-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c56b0013bb40863506e41d8921c85a9e13aea34f35b4a6e95ee7147d46786a66
+ size 85742310

57_ieee/test-00040-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:110db71d05adf341a782577d724d45778ac1348af3f66cb7281326dbc9213fb3
+ size 85750304

57_ieee/test-00041-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70afad4ad593474bed0d59674d24fa0503f462fffd7792b479ac25a3a6f0379d
+ size 85741266

57_ieee/test-00042-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a46c98d777e77f8ff559d4509c029f218e9e42ce5f446e8c6b44572466556a0d
+ size 85749507

57_ieee/test-00043-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4b3517d9841129fc4b510767b8aba2dc0790076392be7ed3d9b4c9f7562e602
+ size 85748901

57_ieee/test-00044-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48a75a82cf8a5278ef1d0cb6d38d51d28878dc72a57526bfe576273570fd3e49
+ size 85737604

57_ieee/test-00045-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0516b9c5b364f06a6c06c4ae04f0561e8348762813a5612231e93c41943e8432
+ size 85735367

57_ieee/test-00046-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86463a6f1d364cbf6bdab1ced3422a5bfa17a0f20ae52e2f1baf215b58b1f82b
+ size 85748676

57_ieee/test-00047-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd12418e5b3e9b75b0211fa3f4fff46b8c8475e3bd2b8bc9df02839c4cc0aa10
+ size 85747366

57_ieee/test-00048-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b91cba7bfb0b2256494ac599bcfb37219ee5f8309a3178b4fd2c6065ae2c2db
+ size 85747403

57_ieee/test-00049-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b558cd8703f1991023f2060d56343eefc13c459fd6414fb57b68480127b9f2c2
+ size 85737930

57_ieee/test-00050-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcc7952aaa336545c4ab2fa30d280a5297eedf1ded8b8390dcdfb78e62de0a55
+ size 85742336

57_ieee/test-00051-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf79ff7a48da55bf196607c2c0ecd72a68b2a3ea958adbf0f8883a7e48e022de
+ size 85736620

57_ieee/test-00052-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93d941763c5932d6b7b463acb3dc8199fcb0c96cf39001ec41e5f6d19c76c8e5
+ size 85741479

57_ieee/test-00053-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9b3ddcfcc7fd1f34d7a9f636077d12ae48ed70864cd01b76e068e0b267c21eb
+ size 85745150

57_ieee/test-00054-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f955c14d62120ec58319fe32c11878353a6abb84c088094c7dbd463434ea694
+ size 85735462

57_ieee/test-00055-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c4f11375f3cb98f06ca35e45990b016728a2b18df855ed8ae4b991747c0f315
+ size 85747472

57_ieee/test-00056-of-00057.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fcf651b951bdebfd5eacf9018b8d823909538e143d6117b6d2a59c8ed946516
+ size 85742779
PGLearn-Small-57_ieee.py DELETED
@@ -1,397 +0,0 @@
- from __future__ import annotations
- from dataclasses import dataclass
- from pathlib import Path
- import json
- import gzip
-
- import datasets as hfd
- import h5py
- import pyarrow as pa
-
- # ┌──────────────┐
- # │   Metadata   │
- # └──────────────┘
-
- @dataclass
- class CaseSizes:
-     n_bus: int
-     n_load: int
-     n_gen: int
-     n_branch: int
-
- CASENAME = "57_ieee"
- SIZES = CaseSizes(n_bus=57, n_load=42, n_gen=7, n_branch=80)
- NUM_TRAIN = 799888
- NUM_TEST = 199973
- NUM_INFEASIBLE = 139
-
- URL = "https://huggingface.co/datasets/PGLearn/PGLearn-Small-57_ieee"
- DESCRIPTION = """\
- The 57_ieee PGLearn optimal power flow dataset, part of the PGLearn-Small collection. \
- """
- VERSION = hfd.Version("1.0.0")
- DEFAULT_CONFIG_DESCRIPTION="""\
- This configuration contains feasible input, metadata, primal solution, and dual solution data \
- for the ACOPF, DCOPF, and SOCOPF formulations on the {case} system.
- """
- USE_ML4OPF_WARNING = """
- ================================================================================================
- Loading PGLearn-Small-57_ieee through the `datasets.load_dataset` function may be slow.
-
- Consider using ML4OPF to directly convert to `torch.Tensor`; for more info see:
- https://github.com/AI4OPT/ML4OPF?tab=readme-ov-file#manually-loading-data
-
- Or, use `huggingface_hub.snapshot_download` and an HDF5 reader; for more info see:
- https://huggingface.co/datasets/PGLearn/PGLearn-Small-57_ieee#downloading-individual-files
- ================================================================================================
- """
- CITATION = """\
- @article{klamkinpglearn,
-     title={{PGLearn - An Open-Source Learning Toolkit for Optimal Power Flow}},
-     author={Klamkin, Michael and Tanneau, Mathieu and Van Hentenryck, Pascal},
-     year={2025},
- }\
- """
-
- IS_COMPRESSED = True
-
- # ┌──────────────────┐
- # │   Formulations   │
- # └──────────────────┘
-
- def acopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(acopf_primal_features(sizes))
-     if dual: features.update(acopf_dual_features(sizes))
-     if meta: features.update({f"ACOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def dcopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(dcopf_primal_features(sizes))
-     if dual: features.update(dcopf_dual_features(sizes))
-     if meta: features.update({f"DCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def socopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(socopf_primal_features(sizes))
-     if dual: features.update(socopf_dual_features(sizes))
-     if meta: features.update({f"SOCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- FORMULATIONS_TO_FEATURES = {
-     "ACOPF": acopf_features,
-     "DCOPF": dcopf_features,
-     "SOCOPF": socopf_features,
- }
-
- # ┌───────────────────┐
- # │   BuilderConfig   │
- # └───────────────────┘
-
- class PGLearnSmall57_ieeeConfig(hfd.BuilderConfig):
-     """BuilderConfig for PGLearn-Small-57_ieee.
-     By default, primal solution data, metadata, input, casejson, are included for the train and test splits.
-
-     To modify the default configuration, pass attributes of this class to `datasets.load_dataset`:
-
-     Attributes:
-         formulations (list[str]): The formulation(s) to include, e.g. ["ACOPF", "DCOPF"]
-         primal (bool, optional): Include primal solution data. Defaults to True.
-         dual (bool, optional): Include dual solution data. Defaults to False.
-         meta (bool, optional): Include metadata. Defaults to True.
-         input (bool, optional): Include input data. Defaults to True.
-         casejson (bool, optional): Include case.json data. Defaults to True.
-         train (bool, optional): Include training samples. Defaults to True.
-         test (bool, optional): Include testing samples. Defaults to True.
-         infeasible (bool, optional): Include infeasible samples. Defaults to False.
-     """
-     def __init__(self,
-         formulations: list[str],
-         primal: bool=True, dual: bool=False, meta: bool=True, input: bool = True, casejson: bool=True,
-         train: bool=True, test: bool=True, infeasible: bool=False,
-         compressed: bool=IS_COMPRESSED, **kwargs
-     ):
-         super(PGLearnSmall57_ieeeConfig, self).__init__(version=VERSION, **kwargs)
-
-         self.case = CASENAME
-         self.formulations = formulations
-
-         self.primal = primal
-         self.dual = dual
-         self.meta = meta
-         self.input = input
-         self.casejson = casejson
-
-         self.train = train
-         self.test = test
-         self.infeasible = infeasible
-
-         self.gz_ext = ".gz" if compressed else ""
-
-     @property
-     def size(self):
-         return SIZES
-
-     @property
-     def features(self):
-         features = {}
-         if self.casejson: features.update(case_features())
-         if self.input: features.update(input_features(SIZES))
-         for formulation in self.formulations:
-             features.update(FORMULATIONS_TO_FEATURES[formulation](SIZES, self.primal, self.dual, self.meta))
-         return hfd.Features(features)
-
-     @property
-     def splits(self):
-         splits: dict[hfd.Split, dict[str, str | int]] = {}
-         if self.train:
-             splits[hfd.Split.TRAIN] = {
-                 "name": "train",
-                 "num_examples": NUM_TRAIN
-             }
-         if self.test:
-             splits[hfd.Split.TEST] = {
-                 "name": "test",
-                 "num_examples": NUM_TEST
-             }
-         if self.infeasible:
-             splits[hfd.Split("infeasible")] = {
-                 "name": "infeasible",
-                 "num_examples": NUM_INFEASIBLE
-             }
-         return splits
-
-     @property
-     def urls(self):
-         urls: dict[str, None | str | list] = {
-             "case": None, "train": [], "test": [], "infeasible": [],
-         }
-
-         if self.casejson: urls["case"] = f"case.json" + self.gz_ext
-
-         split_names = []
-         if self.train: split_names.append("train")
-         if self.test: split_names.append("test")
-         if self.infeasible: split_names.append("infeasible")
-
-         for split in split_names:
-             if self.input: urls[split].append(f"{split}/input.h5" + self.gz_ext)
-             for formulation in self.formulations:
-                 if self.primal: urls[split].append(f"{split}/{formulation}/primal.h5" + self.gz_ext)
-                 if self.dual: urls[split].append(f"{split}/{formulation}/dual.h5" + self.gz_ext)
-                 if self.meta: urls[split].append(f"{split}/{formulation}/meta.h5" + self.gz_ext)
-         return urls
-
- # ┌────────────────────┐
- # │   DatasetBuilder   │
- # └────────────────────┘
-
- class PGLearnSmall57_ieee(hfd.ArrowBasedBuilder):
-     """DatasetBuilder for PGLearn-Small-57_ieee.
-     The main interface is `datasets.load_dataset` with `trust_remote_code=True`, e.g.
-
-     ```python
-     from datasets import load_dataset
-     ds = load_dataset("PGLearn/PGLearn-Small-57_ieee", trust_remote_code=True,
-         # modify the default configuration by passing kwargs
-         formulations=["DCOPF"],
-         dual=False,
-         meta=False,
-     )
-     ```
-     """
-
-     DEFAULT_WRITER_BATCH_SIZE = 10000
-     BUILDER_CONFIG_CLASS = PGLearnSmall57_ieeeConfig
-     DEFAULT_CONFIG_NAME=CASENAME
-     BUILDER_CONFIGS = [
-         PGLearnSmall57_ieeeConfig(
-             name=CASENAME, description=DEFAULT_CONFIG_DESCRIPTION.format(case=CASENAME),
-             formulations=list(FORMULATIONS_TO_FEATURES.keys()),
-             primal=True, dual=True, meta=True, input=True, casejson=True,
-             train=True, test=True, infeasible=False,
-         )
-     ]
-
-     def _info(self):
-         return hfd.DatasetInfo(
-             features=self.config.features, splits=self.config.splits,
-             description=DESCRIPTION + self.config.description,
-             homepage=URL, citation=CITATION,
-         )
-
-     def _split_generators(self, dl_manager: hfd.DownloadManager):
-         hfd.logging.get_logger().warning(USE_ML4OPF_WARNING)
-
-         filepaths = dl_manager.download_and_extract(self.config.urls)
-
-         splits: list[hfd.SplitGenerator] = []
-         if self.config.train:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TRAIN,
-                 gen_kwargs=dict(case_file=filepaths["case"], data_files=tuple(filepaths["train"]), n_samples=NUM_TRAIN),
-             ))
-         if self.config.test:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TEST,
-                 gen_kwargs=dict(case_file=filepaths["case"], data_files=tuple(filepaths["test"]), n_samples=NUM_TEST),
-             ))
-         if self.config.infeasible:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split("infeasible"),
-                 gen_kwargs=dict(case_file=filepaths["case"], data_files=tuple(filepaths["infeasible"]), n_samples=NUM_INFEASIBLE),
-             ))
-         return splits
-
-     def _generate_tables(self, case_file: str | None, data_files: tuple[hfd.utils.track.tracked_str], n_samples: int):
-         case_data: str | None = json.dumps(json.load(open_maybe_gzip(case_file))) if case_file is not None else None
-
-         opened_files = [open_maybe_gzip(file) for file in data_files]
-         data = {'/'.join(Path(df.get_origin()).parts[-2:]).split('.')[0]: h5py.File(of) for of, df in zip(opened_files, data_files)}
-         for k in list(data.keys()):
-             if "/input" in k: data[k.split("/", 1)[1]] = data.pop(k)
-
-         batch_size = self._writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
-         for i in range(0, n_samples, batch_size):
-             effective_batch_size = min(batch_size, n_samples - i)
-
-             sample_data = {
-                 f"{dk}/{k}":
-                     hfd.features.features.numpy_to_pyarrow_listarray(v[i:i + effective_batch_size, ...])
-                 for dk, d in data.items() for k, v in d.items() if f"{dk}/{k}" in self.config.features
-             }
-
-             if case_data is not None:
-                 sample_data["case/json"] = pa.array([case_data] * effective_batch_size)
-
-             yield i, pa.Table.from_pydict(sample_data)
-
-         for f in opened_files:
-             f.close()
-
- # ┌──────────────┐
- # │   Features   │
- # └──────────────┘
-
- FLOAT_TYPE = "float32"
- INT_TYPE = "int64"
- BOOL_TYPE = "bool"
- STRING_TYPE = "string"
-
- def case_features():
-     # FIXME: better way to share schema of case data -- need to treat jagged arrays
-     return {
-         "case/json": hfd.Value(STRING_TYPE),
-     }
-
- META_FEATURES = {
-     "meta/seed": hfd.Value(dtype=INT_TYPE),
-     "meta/formulation": hfd.Value(dtype=STRING_TYPE),
-     "meta/primal_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/dual_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/primal_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/dual_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/termination_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/build_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/extract_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/solve_time": hfd.Value(dtype=FLOAT_TYPE),
- }
-
- def input_features(sizes: CaseSizes):
-     return {
-         "input/pd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/qd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/gen_status": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/branch_status": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/seed": hfd.Value(dtype=INT_TYPE),
-     }
-
- def acopf_primal_features(sizes: CaseSizes):
-     return {
-         "ACOPF/primal/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def acopf_dual_features(sizes: CaseSizes):
-     return {
-         "ACOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_fr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_to": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def dcopf_primal_features(sizes: CaseSizes):
-     return {
-         "DCOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def dcopf_dual_features(sizes: CaseSizes):
-     return {
-         "DCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def socopf_primal_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/primal/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def socopf_dual_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/jabr": hfd.Array2D(shape=(sizes.n_branch, 4), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_fr": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_to": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
-
- # ┌───────────────┐
- # │   Utilities   │
- # └───────────────┘
-
- def open_maybe_gzip(path):
-     return gzip.open(path, "rb") if path.endswith(".gz") else open(path, "rb")
README.md CHANGED
@@ -290,6 +290,14 @@ dataset_info:
  - name: test
    num_bytes: 28126577400
    num_examples: 199973
- download_size: 14799215155
+ download_size: 24439328183
  dataset_size: 140632324390
+ configs:
+ - config_name: 57_ieee
+   data_files:
+   - split: train
+     path: 57_ieee/train-*
+   - split: test
+     path: 57_ieee/test-*
+   default: true
  ---
config.toml DELETED
@@ -1,42 +0,0 @@
- # Name of the reference PGLib case. Must be a valid PGLib case name.
- pglib_case = "pglib_opf_case57_ieee"
- floating_point_type = "Float32"
-
- [sampler]
- # data sampler options
- [sampler.load]
- noise_type = "ScaledUniform"
- l = 0.6 # Lower bound of base load factor
- u = 1.0 # Upper bound of base load factor
- sigma = 0.20 # Relative (multiplicative) noise level.
-
-
- [OPF]
-
- [OPF.ACOPF]
- type = "ACOPF"
- solver.name = "Ipopt"
- solver.attributes.tol = 1e-6
- solver.attributes.linear_solver = "ma27"
-
- [OPF.DCOPF]
- # Formulation/solver options
- type = "DCOPF"
- solver.name = "HiGHS"
-
- [OPF.SOCOPF]
- type = "SOCOPF"
- solver.name = "Clarabel"
- # Tight tolerances
- solver.attributes.tol_gap_abs = 1e-6
- solver.attributes.tol_gap_rel = 1e-6
- solver.attributes.tol_feas = 1e-6
- solver.attributes.tol_infeas_rel = 1e-6
- solver.attributes.tol_ktratio = 1e-6
- # Reduced accuracy settings
- solver.attributes.reduced_tol_gap_abs = 1e-6
- solver.attributes.reduced_tol_gap_rel = 1e-6
- solver.attributes.reduced_tol_feas = 1e-6
- solver.attributes.reduced_tol_infeas_abs = 1e-6
- solver.attributes.reduced_tol_infeas_rel = 1e-6
- solver.attributes.reduced_tol_ktratio = 1e-6
infeasible/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:31ea3e008c2b29308a112d9f218caf5d4ca0a534ae7799220abd9ce6c65cda92
- size 4070

infeasible/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:9322f4cd466e42534e1a9830bc431e9fd15819e051949be8ba44cb9f27e8f98f
- size 5965

infeasible/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6c5e8225691c73e302f721451b6117a4a6d20511baa0ab63fa3dce9246353adc
- size 69969

infeasible/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e7b38c583c018554e6c12cae630bcf866c50924c68c220f5ab3e21d7531e3b43
- size 935676

infeasible/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e6da1fee912ba3839a67079ed017dd84f6a28759cd74a90533ebfba3bd022961
- size 6630

infeasible/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d4e8ab2ea443de575a1c1a15f27dd48eccd21ce6bfe14310ab958b48438bb104
- size 274549

infeasible/input.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:cbdca03e3a4bc7a0fa2e2ce40a69d60bf65097bd39cd077b2a31c93d2b538dbd
- size 43769

test/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:474bdd9321e1b504045835d0ef9c72c4251c67a96d17e53d9e338696d8ce7082
- size 725154412

test/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b7e19fcb83117847817fe928df10136fc7eb05cad9da273374db4f83c565d0eb
- size 6677088

test/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:7b4067531685667a03ee0f52390c97d4a2c8a68b8028bd6c1e4bf9b24d8ec840
- size 319029219

test/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2b97c3f16e9669a77adbca532385513f3eb71e97a60f6ee6cf4072881efcaee3
- size 1456126

test/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f1ceeecf87e0c0d30f0818dffe3013f8378afe070936cd83f5e4703d02ddc8c5
- size 6499391

test/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1095d19fe47feb4b52fbc35e9c943c1744afce226a0555088d2ac58e39215d24
- size 99521518

test/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d0e8d2c5e0bd5f5b4fe47653167b54c265bb9c1313287682d13ecee1ef220320
- size 1340764453