Commit 2be71f4 (verified) · 0 Parent(s)
Super-squash history to reclaim storage
- .gitattributes +77 -0
- Cosmos-Reason1-7B-bf16.gguf +3 -0
- Cosmos-Reason1-7B-bf16_q8_0.gguf +3 -0
- Cosmos-Reason1-7B-f16_q8_0.gguf +3 -0
- Cosmos-Reason1-7B-iq2_m.gguf +3 -0
- Cosmos-Reason1-7B-iq2_s.gguf +3 -0
- Cosmos-Reason1-7B-iq2_xs.gguf +3 -0
- Cosmos-Reason1-7B-iq2_xxs.gguf +3 -0
- Cosmos-Reason1-7B-iq3_m.gguf +3 -0
- Cosmos-Reason1-7B-iq3_s.gguf +3 -0
- Cosmos-Reason1-7B-iq3_xs.gguf +3 -0
- Cosmos-Reason1-7B-iq3_xxs.gguf +3 -0
- Cosmos-Reason1-7B-iq4_nl.gguf +3 -0
- Cosmos-Reason1-7B-iq4_xs.gguf +3 -0
- Cosmos-Reason1-7B-q2_k_m.gguf +3 -0
- Cosmos-Reason1-7B-q2_k_s.gguf +3 -0
- Cosmos-Reason1-7B-q3_k_m.gguf +3 -0
- Cosmos-Reason1-7B-q3_k_s.gguf +3 -0
- Cosmos-Reason1-7B-q4_0.gguf +3 -0
- Cosmos-Reason1-7B-q4_1.gguf +3 -0
- Cosmos-Reason1-7B-q4_k_m.gguf +3 -0
- Cosmos-Reason1-7B-q4_k_s.gguf +3 -0
- Cosmos-Reason1-7B-q5_0.gguf +3 -0
- Cosmos-Reason1-7B-q5_1.gguf +3 -0
- Cosmos-Reason1-7B-q5_k_m.gguf +3 -0
- Cosmos-Reason1-7B-q5_k_s.gguf +3 -0
- Cosmos-Reason1-7B-q6_k_m.gguf +3 -0
- Cosmos-Reason1-7B-q8_0.gguf +3 -0
- Cosmos-Reason1-7B.imatrix +3 -0
- README.md +501 -0
.gitattributes
ADDED
@@ -0,0 +1,77 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-f16.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-f16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-bf16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-f16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-bf16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-f16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-bf16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q2_k_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q3_k_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q4_k_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q5_k_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q6_k_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q2_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q2_k_s.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q6_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q4_0_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q4_1_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q5_1.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q5_0_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-q5_1_l.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq2_xs.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq2_xxs.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq2_s.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq2_m.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq3_xxs.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B.imatrix filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-bf16.gguf filter=lfs diff=lfs merge=lfs -text
Cosmos-Reason1-7B-bf16.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a978add4431b522e82994d22b9dd8b5251432068ae1c18ae9034e05a7553dd08
size 15237853184

Cosmos-Reason1-7B-bf16_q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:afec0b7c9df89caea30da6659015cc13554dec1bd87d59e8204e718360fe1fab
size 9120395264

Cosmos-Reason1-7B-f16_q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd44f17a6ca8dfa0d06421bbc72e93a9fbae9e135c4a05fd73370aa002b0b8cc
size 11287998464

Cosmos-Reason1-7B-iq2_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20fe0df1d3dace804af82afda4535e3ff112690913201f63dfa3bd610b94d311
size 3039121696

Cosmos-Reason1-7B-iq2_s.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a37bc1da50f14d57caa5163bf15b71008c18cf4d36dbc041cd12a192151b0b45
size 2912964896

Cosmos-Reason1-7B-iq2_xs.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:137d8118726a2e5142f446632629f3a90fe41df33d29673c2c6d4eaa198275be
size 2839335200

Cosmos-Reason1-7B-iq2_xxs.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b76e30ed7abf72e93eb482e4b162196ae52629b81b2a02be662ddc65953a485f
size 2650902816

Cosmos-Reason1-7B-iq3_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a02091954c0e1e0b0a233752f1e9633794d9cf29ec6fe229a5f92d26526b668d
size 3603386656

Cosmos-Reason1-7B-iq3_s.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed140526b03d3a26f426be0da13ec54240e09228f57773997fbe34a49cde3a1b
size 3565855008

Cosmos-Reason1-7B-iq3_xs.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:177d2e6093d0b5e4f704d87fde60a3d9d6b5594ee14d774a613b4b5098ff3ef7
size 3412918560

Cosmos-Reason1-7B-iq3_xxs.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec569fe8a3f1ffb0775841334ce38edaf50fda4229ab5c20113de7c49ca1e2a2
size 3272655136

Cosmos-Reason1-7B-iq4_nl.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f20805b6c4e74f33a240a303644e997b1ab9958b4950033001e58700600b5e7
size 4437813536

Cosmos-Reason1-7B-iq4_xs.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af64dbe9ff7c40871f71b050aa897619141823a3393aa47ecdd4d7e693c61104
size 4218472736

Cosmos-Reason1-7B-q2_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1165c5b664d9ed5f0df084b62a971c460789c4fcb8e604f9d18627eaea2baec0
size 3259494688

Cosmos-Reason1-7B-q2_k_s.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5b24c34a23f7cdc3087c06372517b7dd9a2e209e0c2c5535886beeb66069c12
size 2934325536

Cosmos-Reason1-7B-q3_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63d797c67263adf73cd69198aacca5548133e803d5724888c30da5696795d38c
size 3996852512

Cosmos-Reason1-7B-q3_k_s.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0033b9e2ab3a1a55e6b4f6aecf8ac946aa50044b9fbb9e2870ed219c974197ff
size 3610812704

Cosmos-Reason1-7B-q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f7adf55bcb9deffc7e84adda08885879427af94c97cfc10ec28c6f38f958dbb
size 4290883872

Cosmos-Reason1-7B-q4_1.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8340d2cc1ceba0e96ca43187e44ebfad4216b2600e23cd50f9b6903fead376eb
size 4766839072

Cosmos-Reason1-7B-q4_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f041df0a38e776d7ecfa63eff7745321147136c765d69ba5c41c5c578472396a
size 4777648416

Cosmos-Reason1-7B-q4_k_s.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b50b28adbc9c568c3564510363606064749457b3f4db71fe58598e50e20ac807
size 4634059040

Cosmos-Reason1-7B-q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f692da23a42c8bc853ed105db7089bfec3a48974d876a3aa9a7b6effe8c68fad
size 5242794272

Cosmos-Reason1-7B-q5_1.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89df3d31bab584b030c7c930233725d7f02567d133733349cd8b6cf1b2441055
size 5718749472

Cosmos-Reason1-7B-q5_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b5da864fcba23ec3d824bbffbff7042b1686f629c8d45e41e84bdeff462eb04
size 5527449888

Cosmos-Reason1-7B-q5_k_s.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0250e9c5f24f3b62670c380ce5b6e0eceec768ec6beb8e1c6b7612b869796f4
size 5453361440

Cosmos-Reason1-7B-q6_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ea0915b19f27ed5c564b636fd5479e3d5f1df2f2061a07ebd1742d3ba98696b
size 6254199072

Cosmos-Reason1-7B-q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2964c9914dcb95d58c6fa0c6bb63c03f9045f0763bc031f2d56d392e69fe1c83
size 8098525184

Cosmos-Reason1-7B.imatrix
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7cbda06616567f96ceddbc42f8275a81b658f5958a50830c7d444d0b88a167f1
size 4536712
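The hunks above are Git LFS pointer files: each records the LFS spec version, the blob's sha256 oid, and its size in bytes, while the actual weights live in LFS storage. A minimal sketch for verifying a downloaded file against its pointer (the filename and values below are just one example taken from this commit):

```python
# Sketch: verify a downloaded GGUF against its LFS pointer (oid + size above).
import hashlib
import os

def verify(path: str, expected_oid: str, expected_size: int) -> bool:
    """Stream-hash the file and compare with the pointer's sha256 and size."""
    if os.path.getsize(path) != expected_size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_oid

# Example values from the Cosmos-Reason1-7B-q8_0.gguf pointer in this commit.
ok = verify(
    "Cosmos-Reason1-7B-q8_0.gguf",
    "2964c9914dcb95d58c6fa0c6bb63c03f9045f0763bc031f2d56d392e69fe1c83",
    8098525184,
)
print("match" if ok else "mismatch")
```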
README.md
ADDED
@@ -0,0 +1,501 @@
---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
datasets:
- nvidia/Cosmos-Reason1-SFT-Dataset
- nvidia/Cosmos-Reason1-RL-Dataset
- nvidia/Cosmos-Reason1-Benchmark
library_name: transformers
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
tags:
- nvidia
- cosmos
---

# <span style="color: #7FFF7F;">Cosmos-Reason1-7B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`92ecdcc0`](https://github.com/ggerganov/llama.cpp/commit/92ecdcc06a4c405a415bcaa0cb772bc560aa23b1).
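To load one of these GGUF files locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`. The repo id and filename below are assumptions; substitute whichever quant from this repo you actually want. Text-only use is shown for brevity; vision input requires your runtime's multimodal path.

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
# Assumed: repo id and filename below. pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Mungert/Cosmos-Reason1-7B-GGUF",   # assumed repo id
    filename="Cosmos-Reason1-7B-q4_k_m.gguf",   # pick any quant listed in this commit
)

llm = Llama(model_path=model_path, n_ctx=8192)  # room for long chain-of-thought output
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe what a safe lane change looks like."}],
    max_tokens=4096,  # the card below recommends 4096+ output tokens
)
print(out["choices"][0]["message"]["content"])
```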

## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>

Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.

### **Benchmark Context**
All tests were conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- The same prompt set across all quantizations

### **Method**
- **Dynamic Precision Allocation** (sketched in code after this list):
  - First/last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% → IQ2_XXS/IQ3_S (increases efficiency)
- **Critical Component Protection**:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% vs standard 1-2 bit quantization
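As an illustration only (not the actual quantization code), the layer-placement rule described above can be sketched like this; the layer count and tensor names are assumptions matching llama.cpp's usual `blk.N` naming:

```python
# Illustrative sketch of the DynamicGate placement rule described above;
# not the real implementation, just the stated policy as code.
def assign_quant_type(layer_idx: int, n_layers: int, base: str = "IQ2_XXS") -> str:
    """First/last 25% of blocks get IQ4_XS; the middle 50% keeps the base type."""
    first_cut = n_layers // 4
    last_cut = n_layers - first_cut
    if layer_idx < first_cut or layer_idx >= last_cut:
        return "IQ4_XS"
    return base

# Embeddings and the output head are protected separately at Q5_K.
plan = {"token_embd": "Q5_K", "output": "Q5_K"}
plan.update({f"blk.{i}": assign_quant_type(i, 32) for i in range(32)})  # 32 blocks in Llama-3-8B
```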

### **Quantization Performance Comparison (Llama-3-8B)**

| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|-----------------|--------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |

**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU AVX2, 2048-token context)
- Size differences reflect mixed-quantization overhead
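For reference, the perplexity reported above is the standard token-level metric; lower means the model is less surprised by the evaluation text:

```math
\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\left(x_i \mid x_{<i}\right)\right)
```

Intuitively, PPL is the model's average per-token branching factor, so IQ1_M's drop from 27.46 to 15.41 roughly halves the model's effective uncertainty per token.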

**Key Improvements:**
- 🔥 **IQ1_M** shows a massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization

**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)

### **When to Use These Models**

📌 **Fitting models into GPU VRAM**

✔ **Memory-constrained deployments**

✔ **CPU and edge devices** where 1-2 bit errors can be tolerated

✔ **Research** into ultra-low-bit quantization
## **Choosing the Right Model Format**

Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.

### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with a **reduced memory footprint** compared to FP32.

📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.

📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
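A quick way to probe for native BF16 support on an NVIDIA GPU is sketched below. PyTorch is used here only as a convenient capability check and is not required by llama.cpp itself, which detects hardware support on its own.

```python
# Sketch: probe for BF16 support before choosing the BF16 GGUF.
import torch

if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("Native BF16 available: Cosmos-Reason1-7B-bf16.gguf is a good fit.")
else:
    print("No native BF16: prefer the F16 or quantized variants.")
```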

---

### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.

📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.

---

### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, but may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, but require more memory.

📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce the **memory footprint** while keeping reasonable accuracy.

📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
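As a rough rule of thumb, a GGUF's resident memory is approximately its file size plus KV-cache and runtime overhead. A quick fit check is sketched below; the file sizes come from this commit's LFS pointers, while the ~15% overhead margin is a heuristic assumption, not a guarantee.

```python
# Sketch: rough check of whether a quant fits a given memory budget.
# File sizes (bytes) are taken from the LFS pointers in this commit.
QUANT_SIZES = {
    "q4_k_m": 4_777_648_416,
    "q5_k_m": 5_527_449_888,
    "q6_k_m": 6_254_199_072,
    "q8_0": 8_098_525_184,
}

def fits(quant: str, budget_gib: float, overhead: float = 1.15) -> bool:
    """Assume ~15% extra for KV cache and buffers -- a heuristic only."""
    need_gib = QUANT_SIZES[quant] * overhead / 2**30
    return need_gib <= budget_gib

for q in QUANT_SIZES:
    print(q, "fits in 8 GiB" if fits(q, 8.0) else "needs more than 8 GiB")
```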

---

### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.

- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
  - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
  - **Trade-off**: Lower accuracy compared to higher-bit quantizations.

- **IQ3_S**: Small block size for **maximum memory efficiency**.
  - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.

- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
  - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.

- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
  - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.

- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
  - **Use case**: Best for **ARM-based devices** or **low-memory environments**.

---

### **Summary Table: Model Format Selection**

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|-----------|--------------|---------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, lower accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |

---

## **Included Files & Details**

### `Cosmos-Reason1-7B-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.

### `Cosmos-Reason1-7B-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.

### `Cosmos-Reason1-7B-bf16_q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.

### `Cosmos-Reason1-7B-f16_q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.

### `Cosmos-Reason1-7B-q4_k_m.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.

### `Cosmos-Reason1-7B-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.

### `Cosmos-Reason1-7B-q6_k_m.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.

### `Cosmos-Reason1-7B-q8_0.gguf`
- Fully **Q8_0**-quantized model for better accuracy.
- Requires **more memory** but offers higher precision.

### `Cosmos-Reason1-7B-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.

### `Cosmos-Reason1-7B-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.

### `Cosmos-Reason1-7B-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
**Please click "Like" if you find this useful!**
Help me test my **AI-powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face open-source)
- `TestLLM` (Experimental CPU-only)

### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Creating custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` – Note: you need to install a Quantum Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊

# **Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models**

[**Cosmos**](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7) | [**Code**](https://github.com/nvidia-cosmos/cosmos-reason1) | [**Paper**](https://arxiv.org/abs/2503.15558) | [**Paper Website**](https://research.nvidia.com/labs/dir/cosmos-reason1)

# Model Overview

## Description:

**Cosmos-Reason1 Models**: Physical AI models that understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning processes.

The Cosmos-Reason1 models are post-trained with physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning. These are Physical AI models that can understand space, time, and fundamental physics, and can serve as planning models to reason about the next steps of an embodied agent.

The models are ready for commercial use.

**Model Developer**: NVIDIA

## Model Versions

The Cosmos-Reason1 release includes the following model:

- [Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B): Given a text prompt and an input video, the model thinks and then generates an answer with respect to the input text prompt and video.

### License:

This model is released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [[email protected]](mailto:[email protected]).

Under the NVIDIA Open Model License, NVIDIA confirms:

* Models are commercially usable.
* You are free to create and distribute Derivative Models.
* NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.

**Important Note**: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively "Guardrail") contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) will automatically terminate.

### Deployment Geography:

Global

### Use Case:

Physical AI: space, time, and fundamental-physics understanding and embodied reasoning, encompassing robotics and autonomous vehicles (AV).

### Release Date:

* GitHub: [05/17/2025](https://github.com/nvidia-cosmos/cosmos-reason1)
* Hugging Face: [05/17/2025](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7)

## Model Architecture:

Architecture Type: A multi-modal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer LLM.
Network Architecture: Qwen2.5-VL-7B-Instruct.

Cosmos-Reason1-7B is post-trained based on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and follows the same model architecture.

## Input

**Input Type(s)**: Text+Video/Image

**Input Format(s)**:
* Text: String
* Video: mp4
* Image: jpg

**Input Parameters**:
* Text: One-dimensional (1D)
* Video: Three-dimensional (3D)
* Image: Two-dimensional (2D)

**Other Properties Related to Input**:
* Use `FPS=4` for input video to match the training setup.
* Append `Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>.` to the system prompt to encourage a long chain-of-thought reasoning response.
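Putting the two input recommendations together, here is a minimal sketch of how a request could be assembled and the response parsed. The chat structure and helper names are illustrative assumptions; see the Cosmos-Reason1 repo for the exact serving code.

```python
# Sketch: build the recommended system prompt and parse the <think>/<answer> output.
import re

SYSTEM_SUFFIX = (
    "Answer the question in the following format: "
    "<think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."
)

messages = [
    {"role": "system", "content": "You are a helpful assistant. " + SYSTEM_SUFFIX},
    # Video frames should be sampled at FPS=4 to match training; how they are
    # attached depends on the runtime (vLLM, transformers, ...).
    {"role": "user", "content": "Is the robot's grasp in this clip stable?"},
]

def parse_response(text: str) -> dict:
    """Extract the reasoning and final answer blocks, if present."""
    think = re.search(r"<think>\s*(.*?)\s*</think>", text, re.DOTALL)
    answer = re.search(r"<answer>\s*(.*?)\s*</answer>", text, re.DOTALL)
    return {
        "reasoning": think.group(1) if think else None,
        "answer": answer.group(1) if answer else text.strip(),
    }

demo = "<think>\nThe gripper contacts both sides.\n</think>\n\n<answer>\nYes, stable.\n</answer>"
print(parse_response(demo)["answer"])
```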

## Output

**Output Type(s)**: Text

**Output Format**: String

**Output Parameters**: Text: One-dimensional (1D)

**Other Properties Related to Output**:
* We recommend a maximum output length of 4096 tokens or more to avoid truncating the long chain-of-thought response.
* Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration

**Runtime Engine(s):**

* [vLLM](https://github.com/vllm-project/vllm)

**Supported Hardware Microarchitecture Compatibility:**

* NVIDIA Blackwell
* NVIDIA Hopper

**Note**: We have only tested inference with BF16 precision.

**Operating System(s):**

* Linux (We have not tested on other operating systems.)
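A minimal text-only sketch consistent with the vLLM runtime listed above; video input goes through vLLM's multimodal path and chat templating, both omitted here for brevity, and the 4096-token limit follows the output recommendation.

```python
# Sketch: serve the original BF16 checkpoint with vLLM (the listed runtime engine).
from vllm import LLM, SamplingParams

llm = LLM(model="nvidia/Cosmos-Reason1-7B")
params = SamplingParams(max_tokens=4096)  # 4096+ recommended for long chain-of-thought

outputs = llm.generate(["What happens if a cup is pushed off a table?"], params)
print(outputs[0].outputs[0].text)
```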

# Usage

See [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) for details.
* Post-training: [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) provides examples of supervised fine-tuning and reinforcement learning on embodied reasoning datasets.

# Evaluation

Please see our [technical paper](https://arxiv.org/pdf/2503.15558) for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation datasets are released under [Cosmos-Reason1-Benchmark](https://huggingface.co/datasets/nvidia/Cosmos-Reason1-Benchmark). The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, Agibot, RoboFail), ego-centric human demonstration (HoloAssist), and Autonomous Vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA.
All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.

**Data Collection Method**:
* RoboVQA: Hybrid: Automatic/Sensors
* BridgeDataV2: Automatic/Sensors
* AgiBot: Automatic/Sensors
* RoboFail: Automatic/Sensors
* HoloAssist: Human
* AV: Automatic/Sensors

**Labeling Method**:
* RoboVQA: Hybrid: Human, Automated
* BridgeDataV2: Hybrid: Human, Automated
* AgiBot: Hybrid: Human, Automated
* RoboFail: Hybrid: Human, Automated
* HoloAssist: Hybrid: Human, Automated
* AV: Hybrid: Human, Automated

**Metrics**:
We report the model accuracy on the embodied reasoning benchmark introduced in [Cosmos-Reason1](https://arxiv.org/abs/2503.15558). The results differ from those presented in Table 9 due to additional training aimed at supporting a broader range of Physical AI tasks beyond the benchmark.

| | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/) | [Agibot](https://github.com/OpenDriveLab/AgiBot-World) | [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Average |
|--------------|------|------|------|------|------|------|------|
| **Accuracy** | 87.3 | 70.8 | 63.7 | 48.9 | 62.7 | 57.2 | 65.1 |

## Dataset Format
Modality: Video (mp4) and Text

## Dataset Quantification
We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of the video and text pairs is described in the table below.
**The AV data is currently unavailable and will be uploaded soon!**

| | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/) | [Agibot](https://github.com/OpenDriveLab/AgiBot-World) | [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Total Storage Size |
|--------------------|-------|-------|------|-------|------|------|-------------|
| **SFT Data** | 1.14M | 24.7k | 258k | 38.9k | 273k | N/A | **300.6GB** |
| **RL Data** | 252 | 200 | 240 | 200 | 200 | N/A | **2.6GB** |
| **Benchmark Data** | 110 | 100 | 100 | 100 | 100 | 100 | **1.5GB** |

We release text annotations for all embodied reasoning datasets, and videos for the RoboVQA and AV datasets. For other datasets, users may download the source videos from the original data source and find the corresponding video sources via the video names. The held-out RoboFail benchmark is released for measuring generalization capability.

## Inference:
**Acceleration Engine:** PyTorch, flash attention <br>
**Test Hardware:** H100, A100, GB200 <br>
* Minimum of 2 GPUs; multi-node setups require an InfiniBand / RoCE connection <br>

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.

Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Plus Plus (++) Promise

We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:

* Verified to comply with current applicable disclosure laws, regulations, and industry standards.
* Verified to comply with applicable privacy labeling requirements.
* Annotated to describe the collector/source (NVIDIA or a third party).
* Characterized for technical limitations.
* Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
* Reviewed before release.
* Tagged for known restrictions and potential safety implications.

### Bias

| Field | Response |
| :---- | :------- |
| Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | The training video sources contain multiple physical embodiments and environments, including humans, cars, single-arm robots, and bimanual robots in indoor and outdoor environments. By training on numerous and varied physical interactions and curated datasets, we strive to provide a model that does not possess biases towards certain embodiments or environments. |

### Explainability

| Field | Response |
| :---- | :------- |
| Intended Application & Domain: | Physical AI Reasoning |
| Model Type: | Transformer |
| Intended Users: | Physical AI developers |
| Output: | Text |
| Describe how the model works: | Generates text answers based on the input text prompt and video |
| Technical Limitations: | The model may not follow the video or text input accurately in challenging cases, where the input video shows complex scene composition and temporal dynamics. Examples of challenging scenes include: fast camera movements, overlapping human-object interactions, low lighting with high motion blur, and multiple people performing different actions simultaneously. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Quantitative and qualitative evaluation. Cosmos-Reason1 proposes the embodied reasoning benchmark and physical common sense benchmark to evaluate accuracy with visual question answering. |
| Potential Known Risks: | The model's output can generate all forms of text, including what may be considered toxic, offensive, or indecent. |
| Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |

### Privacy

| Field | Response |
| :---- | :------- |
| Generatable or reverse engineerable personal information? | None Known |
| Protected class data used to create this model? | None Known |
| Was consent obtained for any personal data used? | None Known |
| How often is dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy) |

### Safety

| Field | Response |
| :---- | :------- |
| Model Application(s): | Physical AI common sense understanding and embodied reasoning |
| Describe the life critical impact (if present). | None Known |
| Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
| Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. |