Susant-Achary committed
Commit 0b05ca1 · verified · 1 Parent(s): 1c78497

Update README.md

Files changed (1)
  1. README.md +112 -16
README.md CHANGED
@@ -1,23 +1,119 @@
  ---
- library_name: mlx
- license: other
- license_name: lfm1.0
- license_link: LICENSE
+ model-index:
+ - name: LFM2-8B-A1B — MLX (Apple Silicon), **3-bit** (with MoE + RAM planning)
+   results: []
  language:
  - en
- - ar
- - zh
- - fr
- - de
- - ja
- - ko
- - es
- pipeline_tag: text-generation
  tags:
- - liquid
+ - mlx
+ - apple-silicon
+ - liquidai
  - lfm2
- - edge
  - moe
- - mlx
- base_model: LiquidAI/LFM2-8B-A1B
+ - transformer
+ - long-context
+ - instruct
+ - quantized
+ - 3bit
+ - coding
+ pipeline_tag: text-generation
+ license: other
+ license_name: lfm1.0
+ license_link: LICENSE
+ library_name: mlx
+ base_model:
+ - LiquidAI/LFM2-8B-A1B
+ ---
+
+ # LFM2-8B-A1B — **MLX 3-bit** (Apple Silicon)
+
+ **Maintainer / Publisher:** [**Susant Achary**](https://huggingface.co/Susant-Achary)
+ **Upstream model:** [LiquidAI/LFM2-8B-A1B](https://huggingface.co/LiquidAI/LFM2-8B-A1B)
+ **This repo (MLX 3-bit):** `mlx-community/LFM2-8B-A1B-3bit-MLX`
+
+ This repository provides an **Apple-Silicon-optimized MLX build** of **LFM2-8B-A1B** at **3-bit** quantization.
+ 3-bit is an excellent **size↔quality sweet spot** on many Macs—very small memory footprint with surprisingly solid answer quality and snappy decoding.
+
+ ---
+
+ ## 🔎 What is LFM2-8B-A1B?
+
+ - **Architecture:** Mixture-of-Experts (**MoE**) Transformer.
+ - **Size:** ~**8B total parameters** with ~**1B active** per token (the “A1B” naming commonly indicates ~1B active params).
+ - **Why MoE?** Per token, only a subset of experts is activated → **lower compute per token** while retaining a larger parameter pool for expressivity.
+
+ > **Memory reality on a single device:** Even though ~1B parameters are *active* at a time, **all experts typically reside in memory** in single-device runs. Plan **RAM** based on **total parameters**, not just the active slice.
+
+ ---
+
+ ## 📦 What’s in this MLX build
+
+ - `config.json` (MLX), `mlx_model*.safetensors` (**3-bit** shards)
+ - Tokenizer: `tokenizer.json`, `tokenizer_config.json`
+ - Metadata: `model_index.json` (and/or processor metadata as applicable)
+
+ Target: **macOS** on **Apple Silicon (M-series)** using **Metal/MPS**.
+
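+ If you prefer the Python API over the CLI quickstart further down, here is a minimal `mlx-lm` sketch for loading this build. The repo id is the one above; the prompt, `max_tokens` value, and chat-template handling are illustrative, and exact keyword arguments can vary slightly across `mlx-lm` versions.
+
+ ```python
+ # Minimal mlx-lm usage sketch (assumes `pip install mlx-lm` on an Apple Silicon Mac).
+ from mlx_lm import load, generate
+
+ # Downloads the 3-bit weights from the Hub (or reuses the local cache) and builds the model.
+ model, tokenizer = load("mlx-community/LFM2-8B-A1B-3bit-MLX")
+
+ prompt = "Summarize the trade-offs of 3-bit quantization in three bullet points."
+
+ # Apply the chat template if the tokenizer ships one (this is an instruct-style model).
+ if tokenizer.chat_template is not None:
+     prompt = tokenizer.apply_chat_template(
+         [{"role": "user", "content": prompt}],
+         add_generation_prompt=True,
+         tokenize=False,
+     )
+
+ text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
+ print(text)
+ ```
+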
+ ---
+
+ ## ✅ Intended use
+
+ - General **instruction following**, chat, and summarization
+ - **RAG** back-ends and long-context assistants on device
+ - **Schema-guided** structured outputs (JSON) where low RAM is a priority
+
+ ## ⚠️ Limitations
+
+ - 3-bit is **lossy**: the latency/RAM savings come with some accuracy trade-off vs 6/8-bit.
+ - For very long contexts and/or batching, **KV-cache** can dominate memory—tune `max_tokens` and batch size.
+ - Add your own **guardrails/safety** for production deployments.
+
+ ---
+
+ ## 🔢 RAM planning (3-bit, MoE, MLX)
+
+ The ranges below are **practical starting points** (a small calculator sketch follows the component list); always verify peak usage on your own machine.
+
+ ### Rule-of-thumb components
+
+ - **Weights (3-bit):** ≈ `total_params × 0.375 byte` → for **8B params ≈ ~3.0 GB**
+ - **Runtime overhead:** MLX graph/tensors/metadata → **~0.6–1.0 GB**
+ - **KV-cache:** grows with **context × layers × heads × dtype** → **~0.8–2.5+ GB**
+
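+ These components can be scripted into a rough calculator. The sketch below simply adds the ranges from this list to the 3-bit weight size; the default KV-cache range is an assumption for short-to-medium contexts, not a measurement.
+
+ ```python
+ # Back-of-the-envelope peak-RAM estimate for a quantized MoE model on one device.
+ # The overhead and KV-cache ranges are the rule-of-thumb values from this card;
+ # raise the KV-cache range for long contexts or batching.
+ def estimate_peak_ram_gb(total_params_b: float, bits: int,
+                          overhead_gb=(0.6, 1.0), kv_cache_gb=(0.8, 2.5)):
+     """Return a (low, high) peak-RAM estimate in GB."""
+     weights_gb = total_params_b * bits / 8  # billions of params × bytes per param ≈ GB
+     low = weights_gb + overhead_gb[0] + kv_cache_gb[0]
+     high = weights_gb + overhead_gb[1] + kv_cache_gb[1]
+     return round(low, 1), round(high, 1)
+
+ # 8B total parameters at 3-bit → ~3.0 GB of weights, roughly 4.4–6.5 GB peak overall.
+ print(estimate_peak_ram_gb(total_params_b=8, bits=3))
+ ```
+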
+ ### Indicative peak RAM (batch=1)
+
+ | Context window | Estimated peak RAM |
+ |---|---:|
+ | **4k tokens** | **~4.4–5.5 GB** |
+ | **8k tokens** | **~5.2–6.6 GB** |
+ | **16k tokens** | **~6.5–8.8 GB** |
+
+ > For ≤2k windows you may see **~4.0–4.8 GB**. Larger windows/batches increase KV-cache and peak RAM.
+
+ ---
+
+ ## 🧭 Precision choices for LFM2-8B-A1B (lineup planning)
+
+ While this card is **3-bit**, teams often publish multiple precisions. Use this table as a **planning guide** (8B MoE LM; actuals depend on context/batch/prompts):
+
+ | Variant | Typical Peak RAM | Relative Speed | Typical Behavior | When to choose |
+ |---|---:|:---:|---|---|
+ | **3-bit** *(this repo)* | **~4.4–8.8 GB** | **🔥🔥🔥🔥** | **Direct, concise**, great latency | **Default** on 8–16 GB Macs |
+ | **6-bit** | ~7.5–12.5 GB | 🔥🔥 | Best quality under quant | Choose if RAM allows |
+ | **8-bit** | ~9.5–12+ GB | 🔥🔥 | Largest quantized size / highest fidelity | When you prefer simpler 8-bit workflows |
+
+ > **MoE caveat:** MoE lowers **compute per token**; unless experts are **paged/partitioned**, **memory** still scales with **total parameters** on a single device.
+
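+ If you want to produce one of the other precisions yourself rather than download a prebuilt variant, `mlx-lm` ships a converter. The sketch below is an assumption-flagged example: the output directory name is illustrative, and the keyword names (`quantize`, `q_bits`, `q_group_size`) follow recent `mlx-lm` releases; check `python -m mlx_lm.convert --help` for the options in your installed version.
+
+ ```python
+ # Sketch: quantize the upstream weights to another bit-width with mlx-lm's converter.
+ # Keyword names follow recent mlx-lm releases and may differ in older versions.
+ from mlx_lm import convert
+
+ convert(
+     hf_path="LiquidAI/LFM2-8B-A1B",    # upstream model on the Hub
+     mlx_path="LFM2-8B-A1B-6bit-mlx",   # local output directory (illustrative name)
+     quantize=True,
+     q_bits=6,                          # e.g. 3, 4, 6, or 8 for the variants in the table
+     q_group_size=64,
+ )
+ ```
+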
  ---
+
+ ## 🚀 Quickstart (CLI — MLX)
+
+ **Deterministic generation**
+ ```bash
+ python -m mlx_lm.generate \
+ --model mlx-community/LFM2-8B-A1B-3bit-MLX \
+ --prompt "Summarize the following in 5 concise bullet points:\n<your text>" \
+ --max-tokens 256 \
+ --temp 0.0 \
+ --seed 0