Commit fcc6b63 · verified · committed by omarkamali · Parent: 152b2ba

Upload all models and assets for ban (20251001)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +310 -163
  2. models/embeddings/monolingual/ban_128d.bin +2 -2
  3. models/embeddings/monolingual/ban_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/ban_32d.bin +2 -2
  5. models/embeddings/monolingual/ban_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/ban_64d.bin +2 -2
  7. models/embeddings/monolingual/ban_64d_metadata.json +5 -3
  8. models/subword_markov/ban_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/ban_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/ban_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/ban_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/ban_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/ban_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/ban_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/ban_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/ban_2gram_subword.parquet +2 -2
  17. models/subword_ngram/ban_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/ban_3gram_subword.parquet +2 -2
  19. models/subword_ngram/ban_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/ban_4gram_subword.parquet +2 -2
  21. models/subword_ngram/ban_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/ban_tokenizer_16k.model +2 -2
  23. models/tokenizer/ban_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/ban_tokenizer_32k.model +2 -2
  25. models/tokenizer/ban_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/ban_tokenizer_64k.model +2 -2
  27. models/tokenizer/ban_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/ban_tokenizer_8k.model +2 -2
  29. models/tokenizer/ban_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/ban_vocabulary.parquet +2 -2
  31. models/vocabulary/ban_vocabulary_metadata.json +10 -9
  32. models/word_markov/ban_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/ban_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/ban_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/ban_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/ban_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/ban_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/ban_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/ban_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/ban_2gram_word.parquet +2 -2
  41. models/word_ngram/ban_2gram_word_metadata.json +2 -2
  42. models/word_ngram/ban_3gram_word.parquet +2 -2
  43. models/word_ngram/ban_3gram_word_metadata.json +2 -2
  44. models/word_ngram/ban_4gram_word.parquet +2 -2
  45. models/word_ngram/ban_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
 metrics:
 - name: best_compression_ratio
   type: compression
-  value: 4.782
+  value: 5.077
 - name: best_isotropy
   type: isotropy
-  value: 0.8612
+  value: 0.8530
 - name: vocabulary_size
   type: vocab
-  value: 109825
-generated: 2025-12-27
+  value: 0
+generated: 2026-01-03
 ---
 
 # BAN - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets
 
 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
 
@@ -68,81 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)
 
+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results
 
 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 3.889x | 3.84 | 0.1469% | 269,485 |
-| **16k** | 4.255x | 4.21 | 0.1608% | 246,312 |
-| **32k** | 4.547x | 4.49 | 0.1718% | 230,479 |
-| **64k** | 4.782x 🏆 | 4.73 | 0.1807% | 219,125 |
+| **8k** | 4.073x | 4.08 | 0.1890% | 240,149 |
+| **16k** | 4.474x | 4.48 | 0.2076% | 218,639 |
+| **32k** | 4.813x | 4.82 | 0.2234% | 203,246 |
+| **64k** | 5.077x 🏆 | 5.08 | 0.2356% | 192,667 |
 
 ### Tokenization Examples
 
 Below are sample sentences tokenized with each vocabulary size:
 
-**Sample 1:** `1020
-
-1021
-
-1022
-
-1023
-
-1024
-
-1025
-
-1026
-
-1027
-
-1028
-
-1029
-
-Jadma
-
-Embas
-
-Seda
-
-...`
+**Sample 1:** `Hamm (, Latin: Hammona) inggih punika kota ring Rhine-Westphalia Kalér, Jerman.`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁ 1 0 2 01 0 2 1 ... (+53 more)` | 63 |
-| 16k | `▁ 1 0 2 01 0 2 1 ... (+53 more)` | 63 |
-| 32k | `▁ 1 0 2 01 0 2 1 ... (+53 more)` | 63 |
-| 64k | `▁ 1 0 2 0 1 0 2 1 ... (+53 more)` | 63 |
+| 8k | `▁ham m ▁(, ▁latin :ham m ona ) ▁inggih ... (+12 more)` | 22 |
+| 16k | `▁ham m ▁(, ▁latin :ham m ona ) ▁inggih ... (+10 more)` | 20 |
+| 32k | `▁ham m ▁(, ▁latin :ham m ona ) ▁inggih ... (+10 more)` | 20 |
+| 64k | `▁hamm ▁(, ▁latin :hamm ona ) ▁inggih ▁punika ▁kota ... (+8 more)` | 18 |
 
-**Sample 2:** `Pustaka
-
-Pranala liyané
-
-Kategori:Abad ka-17`
+**Sample 2:** `Kharkiv (), utawi Kharkov () inggih punika kota pinih ageng kakalih ring Ukraina...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁pustakapranalaliyanékategori : abadka - 1 7` | 10 |
-| 16k | `▁pustakapranalaliyanékategori : abadka - 1 7` | 10 |
-| 32k | `▁pustakapranalaliyanékategori : abadka - 1 7` | 10 |
-| 64k | `▁pustakapranalaliyanékategori : abadka - 1 7` | 10 |
+| 8k | `▁kh ark iv (),utawikh ark ov() ▁inggih ... (+24 more)` | 34 |
+| 16k | `▁kh ark iv (),utawikh ark ov() ▁inggih ... (+22 more)` | 32 |
+| 32k | `▁kh ark iv (),utawikh ark ov() ▁inggih ... (+22 more)` | 32 |
+| 64k | `▁kharkiv(),utawikhark ov ▁()inggih ▁punika ▁kota ▁pinih ... (+15 more)` | 25 |
 
-**Sample 3:** `Siung Sri Lanka (Gracula ptilogenys), inggih punika satunggil curik, anggota kul...`
+**Sample 3:** `Brasília (;"Brasilia" (US) tur ) inggih punika ibu kota saking Brasil. Pustaka`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁si ung ▁sri ▁lan ka ▁( gr ac ula ▁p ... (+24 more)` | 34 |
-| 16k | `▁si ung ▁sri ▁lanka ▁( gr ac ula ▁p til ... (+21 more)` | 31 |
-| 32k | `▁siungsri ▁lanka ▁( gr ac ulap til ogen ... (+19 more)` | 29 |
-| 64k | `▁siungsri ▁lanka ▁( graculaptil ogen ys ),inggih ... (+15 more)` | 25 |
+| 8k | `▁br as í l ia ▁(; " br asil ia ... (+14 more)` | 24 |
+| 16k | `▁br as í lia ▁(; " br asil ia " ... (+13 more)` | 23 |
+| 32k | `▁brasília(; " br asil ia "( us ) ... (+10 more)` | 20 |
+| 64k | `▁brasília(;" brasil ia "( us ) ▁tur ▁) ... (+8 more)` | 18 |
 
 ### Key Findings
 
-- **Best Compression:** 64k achieves 4.782x compression
-- **Lowest UNK Rate:** 8k with 0.1469% unknown tokens
+- **Best Compression:** 64k achieves 5.077x compression
+- **Lowest UNK Rate:** 8k with 0.1890% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
@@ -151,57 +129,89 @@ Kategori:Abad ka-17`
 
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)
 
+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)
 
 ### Results
 
-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 6,772 🏆 | 12.73 | 86,017 | 32.0% | 53.5% |
-| **2-gram** | 287 🏆 | 8.17 | 6,739 | 67.5% | 98.5% |
-| **3-gram** | 9,433 | 13.20 | 132,180 | 30.5% | 50.3% |
-| **3-gram** | 2,255 | 11.14 | 56,338 | 28.2% | 73.8% |
-| **4-gram** | 14,846 | 13.86 | 212,984 | 26.8% | 45.5% |
-| **4-gram** | 10,513 | 13.36 | 295,874 | 17.0% | 50.1% |
+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 4,798 | 12.23 | 59,688 | 35.6% | 57.3% |
+| **2-gram** | Subword | 225 🏆 | 7.81 | 7,788 | 73.4% | 99.2% |
+| **3-gram** | Word | 5,769 | 12.49 | 77,113 | 33.5% | 55.7% |
+| **3-gram** | Subword | 1,669 | 10.70 | 42,522 | 31.2% | 79.1% |
+| **4-gram** | Word | 8,680 | 13.08 | 116,715 | 28.6% | 51.0% |
+| **4-gram** | Subword | 7,684 | 12.91 | 208,144 | 18.1% | 53.6% |
 
 ### Top 5 N-grams by Size
 
-**2-grams:**
+**2-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `kategori :` | 56,343 |
-| 2 | `situs resmi` | 43,670 |
-| 3 | `inggih punika` | 39,156 |
-| 4 | `pusat statistik` | 24,773 |
-| 5 | `badan pusat` | 24,763 |
+| 1 | `situs resmi` | 41,099 |
+| 2 | `inggih punika` | 37,495 |
+| 3 | `silih tunggil` | 22,082 |
+| 4 | `pranala jaba` | 21,960 |
+| 5 | `pusat statistik` | 21,725 |
 
-**3-grams:**
+**3-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `badan pusat statistik` | 24,761 |
-| 2 | `pustaka pranala jaba` | 21,699 |
-| 3 | `) inggih punika` | 21,548 |
-| 4 | `inggih punika silih` | 20,523 |
-| 5 | `punika silih tunggil` | 20,157 |
+| 1 | `badan pusat statistik` | 21,708 |
+| 2 | `pustaka pranala jaba` | 20,507 |
+| 3 | `inggih punika silih` | 19,377 |
+| 4 | `punika silih tunggil` | 19,020 |
+| 5 | `pranala jaba situs` | 17,860 |
 
-**4-grams:**
+**4-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `inggih punika silih tunggil` | 20,047 |
-| 2 | `pranala jaba situs resmi` | 19,038 |
-| 3 | `pustaka pranala jaba situs` | 18,670 |
-| 4 | `) inggih punika silih` | 18,246 |
-| 5 | `( aksara bali :` | 17,893 |
+| 1 | `inggih punika silih tunggil` | 18,913 |
+| 2 | `pranala jaba situs resmi` | 17,672 |
+| 3 | `pustaka pranala jaba situs` | 17,290 |
+| 4 | `dados kauahin ilang yening` | 14,166 |
+| 5 | `kauahin ilang yening url` | 13,881 |
+
+**2-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `a n` | 880,577 |
+| 2 | `n g` | 735,053 |
+| 3 | `a _` | 536,413 |
+| 4 | `i n` | 523,219 |
+| 5 | `n _` | 516,092 |
+
+**3-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `n g _` | 361,156 |
+| 2 | `a n _` | 287,413 |
+| 3 | `i n g` | 287,067 |
+| 4 | `a n g` | 219,608 |
+| 5 | `_ k a` | 213,760 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `i n g _` | 219,518 |
+| 2 | `r i n g` | 145,165 |
+| 3 | `_ r i n` | 128,090 |
+| 4 | `a n g _` | 86,655 |
+| 5 | `u n i k` | 72,566 |
 
 ### Key Findings
 
-- **Best Perplexity:** 2-gram with 287
+- **Best Perplexity:** 2-gram (subword) with 225
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
-- **Coverage:** Top-1000 patterns cover ~50% of corpus
+- **Coverage:** Top-1000 patterns cover ~54% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
 
 ---
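In both the old and new Results tables, Perplexity equals 2 raised to the Entropy column up to rounding (2^7.81 ≈ 225 for the subword 2-gram, 2^12.23 ≈ 4,800 for the word 2-gram), i.e. perplexity is derived from Shannon entropy in bits. A sketch of recomputing entropy, perplexity, and top-1000 coverage from one of the n-gram parquets; the `count` column name is an assumption about the undocumented schema:

```python
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/ban_2gram_word.parquet")
p = df["count"] / df["count"].sum()       # assumed column name
entropy = float(-(p * np.log2(p)).sum())  # Shannon entropy in bits
perplexity = 2.0 ** entropy               # 2**12.23 ≈ 4,800 (table: 4,798)

coverage_1000 = df["count"].nlargest(1000).sum() / df["count"].sum()
print(f"H={entropy:.2f} bits, PPL={perplexity:,.0f}, top-1000={coverage_1000:.1%}")
```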
@@ -209,55 +219,86 @@ Kategori:Abad ka-17`
 
 ![Markov Entropy](visualizations/markov_entropy.png)
 
+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)
 
 ### Results
 
-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.5448 | 1.459 | 4.25 | 354,734 | 45.5% |
-| **1** | 1.0973 | 2.140 | 6.84 | 3,094 | 0.0% |
-| **2** | 0.2539 | 1.192 | 1.68 | 1,504,653 | 74.6% |
-| **2** | 0.8467 | 1.798 | 5.47 | 21,162 | 15.3% |
-| **3** | 0.0992 | 1.071 | 1.20 | 2,520,521 | 90.1% |
-| **3** | 0.8884 | 1.851 | 4.45 | 115,750 | 11.2% |
-| **4** | 0.0443 🏆 | 1.031 | 1.08 | 3,008,321 | 95.6% |
-| **4** | 0.7379 🏆 | 1.668 | 3.16 | 515,141 | 26.2% |
+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.7212 | 1.649 | 5.13 | 253,714 | 27.9% |
+| **1** | Subword | 0.9714 | 1.961 | 7.03 | 4,633 | 2.9% |
+| **2** | Word | 0.2297 | 1.173 | 1.53 | 1,298,868 | 77.0% |
+| **2** | Subword | 0.6107 | 1.527 | 3.55 | 32,560 | 38.9% |
+| **3** | Word | 0.0749 | 1.053 | 1.14 | 1,983,308 | 92.5% |
+| **3** | Subword | 0.5954 | 1.511 | 3.32 | 115,474 | 40.5% |
+| **4** | Word | 0.0289 🏆 | 1.020 | 1.05 | 2,240,261 | 97.1% |
+| **4** | Subword | 0.6610 | 1.581 | 2.96 | 383,801 | 33.9% |
 
-### Generated Text Samples
+### Generated Text Samples (Word-based)
 
-Below are text samples generated from each Markov chain model:
+Below are text samples generated from each word-based Markov chain model:
 
 **Context Size 1:**
 
-1. `, seperti kota binjai kategori : désa dinas sané mangkin madué 10 désa pakraman buléléng .`
-2. `. iklan di pulo kyushu . akéhnyané 1 . kategori : ᬕᬮ ᭄ ᬤ`
-3. `ring warsa 2019 , definisi definisi asli riantara 24 / ilang . there is defined hypnosis`
+1. `ring warsa puniki dados kauahin ilang yening url dados kaapus saking sistem ekologi dan bedah langsu...`
+2. `kabupatén kediri jawa timur pustaka pranala jaba situs resmi pamréntahan wali ngancan ngamokohang ba...`
+3. `punika silih tunggil désa ring thailand punika wenten ring sérial mabasis ring wewidangan kecamatan ...`
 
 **Context Size 2:**
 
-1. `kategori : kota kendari wali kota ngawit jabatan saking pinanggal 22 pébruari 1857 – 1 al -`
-2. `situs resmi pamréntahan kabupatén tuban cutetan : pranala dados kauahin / ilang . yening url nenten ...`
-3. `inggih punika silih tunggil sanganan sané nénten pastika sakéwanten sumber akéh saking cina , itsĕrl...`
+1. `situs resmi pamréntahan kabupatén bima badan pusat statistik kota bengkulu badan pusat statistik pro...`
+2. `inggih punika silih sinunggil gendingan tradisional thailand sane pinih sering kacingak pinaka gerha...`
+3. `silih tunggil pagending tur ngamedalang surat kaputusan nomor sadurugnyane ring warsa akéh kramanyan...`
 
 **Context Size 3:**
 
-1. `badan pusat statistik kepulauan bangka belitung badan pusat statistik kota surabaya cutetan : url da...`
-2. `pustaka pranala jaba taman pahlawan margarana , ring pamahbah nyané , kain sasirangan kapercaya pras...`
-3. `) inggih punika silih tunggil kecamatan ring kabupatén bungo , propinsi jambi , ring panegara indoné...`
+1. `badan pusat statistik propinsi jawa tengah indonésia mawit saking pérméndagri nomor 137 warsa indik ...`
+2. `pustaka pranala jaba situs resmi propinsi bali badan pusat statistik propinsi kalimantan selatan bad...`
+3. `inggih punika silih tunggil kecamatan ring kabupatén timor tengah utara ring nusa tenggara timur bad...`
 
 **Context Size 4:**
 
-1. `inggih punika silih tunggil kecamatan ring kabupatén gowa , propinsi sulawesi selatan tanjung batu ,...`
-2. `pranala jaba situs resmi pemerintah kota tangerang situs resmi bps kota tangerang cutetan : url dado...`
-3. `pustaka pranala jaba situs resmi pamréntahan nusa tenggara barat badan pusat statistik sumatra utara...`
+1. `inggih punika silih tunggil désa ring kecamatan pulau pulau kur tual propinsi maluku indonésia pusta...`
+2. `pranala jaba situs resmi pamrentahan provinsi kepulauan bangka belitung badan pusat statistik kabupa...`
+3. `pustaka pranala jaba situs resmi pamrentahan provinsi kepulauan bangka belitung badan pusat statisti...`
+
+### Generated Text Samples (Subword-based)
+
+Below are text samples generated from each subword-based Markov chain model:
+
+**Context Size 1:**
+
+1. `a._gané_l_i,_dan`
+2. `_ptrandopi_mi_ba`
+3. `n_107_sika,_dika`
+
+**Context Size 2:**
+
+1. `angaing_wawewidué`
+2. `ng_gu_kin_éman_no`
+3. `a_]_garingang_lat`
+
+**Context Size 3:**
+
+1. `ng_kabupatén_sané_`
+2. `an_kaapustaka_miwa`
+3. `inggih_tunggih_pas`
+
+**Context Size 4:**
+
+1. `ing_basa_badan_pran`
+2. `ring_kaapus_sané_ri`
+3. `_ring_kabupatén_kah`
 
 ### Key Findings
 
-- **Best Predictability:** Context-4 with 95.6% predictability
+- **Best Predictability:** Context-4 (word) with 97.1% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (515,141 contexts)
+- **Memory Trade-off:** Larger contexts require more storage (383,801 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
 
 ---
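The samples above are produced by walking the transition tables shipped in `models/word_markov/`. A minimal sampler under an assumed schema (`context`, `next`, `count` columns; inspect the parquet before relying on these names):

```python
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/ban_markov_ctx2_word.parquet")

def step(context: list[str]) -> str:
    # Assumed schema: one row per (context, next word) pair with its count.
    rows = df[df["context"] == " ".join(context)]
    return random.choices(rows["next"].tolist(), weights=rows["count"].tolist())[0]

words = ["inggih", "punika"]  # seed length must match the context size (2)
for _ in range(20):
    words.append(step(words[-2:]))
print(" ".join(words))
```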
@@ -273,64 +314,64 @@ Below are text samples generated from each Markov chain model:
 
 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 109,825 |
-| Total Tokens | 4,059,826 |
-| Mean Frequency | 36.97 |
+| Vocabulary Size | 96,177 |
+| Total Tokens | 3,540,495 |
+| Mean Frequency | 36.81 |
 | Median Frequency | 3 |
-| Frequency Std Dev | 763.95 |
+| Frequency Std Dev | 739.04 |
 
 ### Most Common Words
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | ring | 133,380 |
-| 2 | kabupatén | 67,955 |
-| 3 | kategori | 56,442 |
-| 4 | punika | 52,655 |
-| 5 | situs | 48,035 |
-| 6 | sané | 47,128 |
-| 7 | resmi | 44,824 |
-| 8 | kecamatan | 42,212 |
-| 9 | inggih | 39,593 |
-| 10 | saking | 39,394 |
+| 1 | ring | 127,899 |
+| 2 | kabupatén | 58,514 |
+| 3 | punika | 50,657 |
+| 4 | sané | 45,835 |
+| 5 | situs | 44,988 |
+| 6 | resmi | 42,224 |
+| 7 | inggih | 37,927 |
+| 8 | saking | 37,341 |
+| 9 | url | 32,061 |
+| 10 | miwah | 31,507 |
 
 ### Least Common Words (from vocabulary)
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | padaido | 2 |
-| 2 | inswambesi | 2 |
-| 3 | asaryendi | 2 |
-| 4 | sopendo | 2 |
-| 5 | pomdori | 2 |
-| 6 | yawosi | 2 |
-| 7 | ᬧᬓᬓ | 2 |
-| 8 | potrekwastanngawit | 2 |
-| 9 | patonangi | 2 |
-| 10 | ᬢᬢᬓᬦ | 2 |
+| 1 | kitou | 2 |
+| 2 | sialet | 2 |
+| 3 | dibanda | 2 |
+| 4 | ᬦᬶᬲᬫ᭄ | 2 |
+| 5 | reuba | 2 |
+| 6 | reuleut | 2 |
+| 7 | rheue | 2 |
+| 8 | uleue | 2 |
+| 9 | muling | 2 |
+| 10 | sanderling | 2 |
 
 ### Zipf's Law Analysis
 
 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 1.1336 |
-| R² (Goodness of Fit) | 0.997567 |
+| Zipf Coefficient | 1.1306 |
+| R² (Goodness of Fit) | 0.997983 |
 | Adherence Quality | **excellent** |
 
 ### Coverage Analysis
 
 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 43.3% |
-| Top 1,000 | 67.9% |
-| Top 5,000 | 82.1% |
-| Top 10,000 | 87.0% |
+| Top 100 | 44.6% |
+| Top 1,000 | 68.9% |
+| Top 5,000 | 82.9% |
+| Top 10,000 | 87.9% |
 
 ### Key Findings
 
-- **Zipf Compliance:** R²=0.9976 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 43.3% of corpus
-- **Long Tail:** 99,825 words needed for remaining 13.0% coverage
+- **Zipf Compliance:** R²=0.9980 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 44.6% of corpus
+- **Long Tail:** 86,177 words needed for remaining 12.1% coverage
 
 ---
 ## 5. Word Embeddings Evaluation
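The Zipf coefficient and R² above come from fitting log frequency against log rank; a least-squares sketch over the vocabulary parquet (the `frequency` column name is an assumption about the undocumented schema):

```python
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/ban_vocabulary.parquet")
freq = np.sort(df["frequency"].to_numpy())[::-1]  # assumed column name
log_rank = np.log(np.arange(1, len(freq) + 1))
log_freq = np.log(freq)

slope, intercept = np.polyfit(log_rank, log_freq, 1)
resid = log_freq - (slope * log_rank + intercept)
r2 = 1.0 - (resid**2).sum() / ((log_freq - log_freq.mean())**2).sum()
print(f"Zipf coefficient ≈ {-slope:.4f}, R² ≈ {r2:.6f}")  # report: 1.1306, 0.997983
```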
@@ -343,24 +384,127 @@ Below are text samples generated from each Markov chain model:
 
 ![t-SNE Sentences](visualizations/tsne_sentences.png)
 
-### Model Comparison
+### 5.1 Cross-Lingual Alignment
+
+> *Note: Multilingual alignment visualization not available for this language.*
+
+### 5.2 Model Comparison
 
-| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
-|-------|------------|-----------|----------|----------|----------|
-| **mono_32d** | 50,333 | 32 | 4.290 | 1.041 | 0.8612 🏆 |
-| **mono_64d** | 50,333 | 64 | 4.879 | 1.017 | 0.8485 |
-| **mono_128d** | 50,333 | 128 | 5.532 | 0.920 | 0.8053 |
-| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
+| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+|-------|-----------|----------|------------------|---------------|----------------|
+| **mono_32d** | 32 | 0.8530 🏆 | 0.3516 | N/A | N/A |
+| **mono_64d** | 64 | 0.8495 | 0.2832 | N/A | N/A |
+| **mono_128d** | 128 | 0.8092 | 0.2232 | N/A | N/A |
 
 ### Key Findings
 
-- **Best Isotropy:** mono_32d with 0.8612 (more uniform distribution)
-- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
-- **Vocabulary Coverage:** All models cover 50,333 words
-- **Recommendation:** 100d for balanced semantic capture and efficiency
+- **Best Isotropy:** mono_32d with 0.8530 (more uniform distribution)
+- **Semantic Density:** Average pairwise similarity of 0.2860. Lower values indicate better semantic separation.
+- **Alignment Quality:** No aligned models evaluated in this run.
+- **Recommendation:** 128d aligned for best cross-lingual performance
 
 ---
-## 6. Summary & Recommendations
+## 6. Morphological Analysis (Experimental)
+
+> ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+### 6.1 Productivity & Complexity
+
+| Metric | Value | Interpretation | Recommendation |
+|--------|-------|----------------|----------------|
+| Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+| Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+### 6.2 Affix Inventory (Productive Units)
+
+These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+#### Productive Prefixes
+| Prefix | Examples |
+|--------|----------|
+| `ma-` | martins, masduki, maffin |
+| `ka-` | kaméloh, kaaranin, kasum |
+| `pa-` | palopat, panandatanganan, pail |
+| `pe-` | peting, pencok, pemantauan |
+
+#### Productive Suffixes
+| Suffix | Examples |
+|--------|----------|
+| `-n` | baharuddin, setyawan, roussillon |
+| `-an` | setyawan, panandatanganan, mengupayakan |
+| `-ng` | peting, speaking, sanderling |
+| `-ang` | tenggarang, lendang, nguwahang |
+| `-né` | leluhurnyané, putranidané, bébékné |
+
+### 6.3 Bound Stems (Lexical Roots)
+
+Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+| Stem | Cohesion | Substitutability | Examples |
+|------|----------|------------------|----------|
+| `anga` | 1.47x | 361 contexts | nanga, sanga, hanga |
+| `ngan` | 1.54x | 182 contexts | angan, ingan, tengan |
+| `nten` | 1.71x | 86 contexts | inten, enten, wnten |
+| `atan` | 1.52x | 149 contexts | vatan, gatan, matan |
+| `ungg` | 1.55x | 117 contexts | tungg, ungga, unggun |
+| `akin` | 1.88x | 41 contexts | aking, yakin, dakin |
+| `nggi` | 1.58x | 73 contexts | anggi, nggih, senggi |
+| `taha` | 1.90x | 32 contexts | tahai, tahap, tahan |
+| `ggih` | 2.03x | 22 contexts | nggih, lnggih, inggih |
+| `ados` | 2.01x | 22 contexts | dados, sados, padosa |
+| `isti` | 1.61x | 36 contexts | bistik, sistim, pistia |
+| `cama` | 1.87x | 19 contexts | camat, camas, camah |
+
+### 6.4 Affix Compatibility (Co-occurrence)
+
+This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+| Prefix | Suffix | Frequency | Examples |
+|--------|--------|-----------|----------|
+| `pa-` | `-n` | 112 words | palimunan, pawacanan |
+| `ka-` | `-n` | 112 words | kamerdékaan, kagenahin |
+| `pa-` | `-an` | 96 words | palimunan, pawacanan |
+| `pe-` | `-n` | 92 words | perhubungan, penyaringan |
+| `pe-` | `-an` | 81 words | perhubungan, penyaringan |
+| `ka-` | `-ng` | 77 words | kaidipang, kawedharang |
+| `ka-` | `-ang` | 61 words | kaidipang, kawedharang |
+| `ka-` | `-an` | 56 words | kamerdékaan, kamaharajan |
+| `ma-` | `-n` | 55 words | marepan, mapitungan |
+| `ma-` | `-an` | 39 words | marepan, mapitungan |
+
+### 6.5 Recursive Morpheme Segmentation
+
+Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+| Word | Suggested Split | Confidence | Stem |
+|------|-----------------|------------|------|
+| kauningan | **`ka-uning-an`** | 6.0 | `uning` |
+| kaorganisasiang | **`ka-organisasi-ang`** | 6.0 | `organisasi` |
+| kakaonang | **`ka-ka-onang`** | 6.0 | `onang` |
+| pasilihan | **`pa-silih-an`** | 6.0 | `silih` |
+| kajahatan | **`ka-jahat-an`** | 6.0 | `jahat` |
+| kasedukan | **`ka-seduk-an`** | 6.0 | `seduk` |
+| kalaporang | **`ka-lapor-ang`** | 6.0 | `lapor` |
+| kakuasaan | **`ka-kuasa-an`** | 6.0 | `kuasa` |
+| padruwénan | **`pa-druwén-an`** | 6.0 | `druwén` |
+| palekadan | **`pa-lekad-an`** | 6.0 | `lekad` |
+| mategakan | **`ma-tegak-an`** | 6.0 | `tegak` |
+| kaungkabang | **`ka-ungkab-ang`** | 6.0 | `ungkab` |
+| kauwugang | **`ka-uwug-ang`** | 6.0 | `uwug` |
+| panyambung | **`pa-nyambu-ng`** | 6.0 | `nyambu` |
+| panularan | **`pa-nular-an`** | 6.0 | `nular` |
+
+### 6.6 Linguistic Interpretation
+
+> **Automated Insight:**
+> The language BAN appears to be relatively isolating, or to have a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+
+---
+## 7. Summary & Recommendations
 
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
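The three Semantic Density values in section 5.2 average to the 0.2860 quoted in its findings, consistent with "average pairwise similarity". The isotropy estimator itself is not documented in this commit; the covariance-spectrum ratio below is one common proxy, not necessarily the pipeline's. A sketch on stand-in vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.normal(size=(5000, 32))  # stand-in for the ban_32d embedding matrix

# Semantic density: mean pairwise cosine similarity over a sample of words.
unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
sample = unit[rng.choice(len(unit), size=1000, replace=False)]
sims = sample @ sample.T
density = sims[np.triu_indices_from(sims, k=1)].mean()

# Isotropy proxy: how evenly variance spreads across directions (1.0 = uniform).
eigvals = np.linalg.eigvalsh(np.cov(vecs, rowvar=False))
isotropy = eigvals.min() / eigvals.max()
print(f"density={density:.4f}, isotropy proxy={isotropy:.4f}")
```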
@@ -368,11 +512,12 @@ Below are text samples generated from each Markov chain model:
 
 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (4.78x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (287) |
-| Markov | **Context-4** | Highest predictability (95.6%) |
+| Tokenizer | **64k BPE** | Best compression (5.08x) |
+| N-gram | **2-gram** | Lowest perplexity (225) |
+| Markov | **Context-4** | Highest predictability (97.1%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |
 
+
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide
 
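Section 6 above rests on a substitutability test: a candidate affix counts as productive when stripping it leaves a stem that is itself attested. A toy version of that test (the real pipeline's sampling and scoring are not documented in this commit), using words from the segmentation table in 6.5:

```python
def prefix_hits(vocab: set[str], prefix: str, min_stem: int = 3) -> list[str]:
    """Words whose remainder after stripping `prefix` is itself in the vocabulary."""
    return sorted(
        w for w in vocab
        if w.startswith(prefix)
        and len(w) - len(prefix) >= min_stem
        and w[len(prefix):] in vocab
    )

vocab = {"kajahatan", "jahatan", "jahat", "kakuasaan", "kuasaan", "kuasa", "lapor"}
print(prefix_hits(vocab, "ka"))  # ['kajahatan', 'kakuasaan']
```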
@@ -562,7 +707,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
-  publisher = {HuggingFace},
+  doi = {10.5281/zenodo.18073153},
+  publisher = {Zenodo},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }
@@ -578,7 +724,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*
 
-*Report Date: 2025-12-27 23:53:08*
+*Report Date: 2026-01-03 06:12:56*
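All binaries below are stored with Git LFS, so each diff only swaps a small pointer file: a `version` line, an `oid sha256:` line, and a `size` line. A sketch of reading a pointer, plus the usual way to fetch the resolved file (the repo id is a guess based on the links above, not confirmed by this page):

```python
from pathlib import Path

def read_lfs_pointer(path: str) -> dict[str, str]:
    """Parse a Git LFS pointer file of the form shown in the diffs below."""
    fields = dict(
        line.split(" ", 1) for line in Path(path).read_text().splitlines() if line
    )
    return {"oid": fields["oid"].removeprefix("sha256:"), "size": fields["size"]}

# To download the actual payload rather than the pointer:
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id="wikilangs/ban",  # hypothetical repo id
#                        filename="models/tokenizer/ban_tokenizer_32k.model")
```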
models/embeddings/monolingual/ban_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ea87c1477ff7362a05ee5aecc04514daeeb0751d9258d36096220826e2071195
-size 1076400925
+oid sha256:bdf8bd41f143ec229b2b83e523def59ca3894f5e32c88dc9ad1d5777bb3cdbc1
+size 1069224476

models/embeddings/monolingual/ban_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 50333
+  "vocab_size": 43447
 }

models/embeddings/monolingual/ban_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b78c8beda9adf946cfb61ebe5d7345f1a46cfbada20e008894b25148ee2b7fb6
-size 269745181
+oid sha256:3c7b5b04af24d58eb2352605fd5a6a3884aa17bd5211923840f1d2399995b3dc
+size 267857180

models/embeddings/monolingual/ban_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 50333
+  "vocab_size": 43447
 }

models/embeddings/monolingual/ban_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ac247ae04637df7c57b93803868d52ed47fb91b34dd674de9b86612a1856543d
-size 538630429
+oid sha256:45be3ee34bd2a796319b97a63a3801529c7d2b0804d891a90aa7697ac777fcda
+size 534979612

models/embeddings/monolingual/ban_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
  },
-  "vocab_size": 50333
+  "vocab_size": 43447
 }
models/subword_markov/ban_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:07d34933ba1ea350752ae85b877a8c59815d819177bf4c06cc1c33c52a23a4ab
-size 164464
+oid sha256:b557a43cbf89ec6e5b8fe521107f375a803257117295fa8f2f26d54017e944ef
+size 238064

models/subword_markov/ban_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "ban",
-  "unique_contexts": 3094,
-  "total_transitions": 31281701
+  "unique_contexts": 4633,
+  "total_transitions": 26072638
 }

models/subword_markov/ban_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7a094b1a8d92237ba3ee5d4bb02c77daf556d4e57c92818f8594ee12d0d9dba7
-size 930375
+oid sha256:bc162e729fc9f1b78bdc516d217698eb4c1e8af546cf0d4b64801623d1296b5e
+size 1052622

models/subword_markov/ban_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "ban",
-  "unique_contexts": 21162,
-  "total_transitions": 31247141
+  "unique_contexts": 32560,
+  "total_transitions": 26039719
 }

models/subword_markov/ban_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c14ef4a97de8adc59b7a39da2f9d5cdcbdc15c1339b56d4fe8f54919ee7676cc
-size 3747847
+oid sha256:62c18ce24bb55f6d1333346623229b9025cc0f78b411f131da48dbb5f6b5bfaf
+size 3150588

models/subword_markov/ban_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "ban",
-  "unique_contexts": 115750,
-  "total_transitions": 31212581
+  "unique_contexts": 115474,
+  "total_transitions": 26006800
 }

models/subword_markov/ban_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:83b89243df0423d04c3f14b54a3854b87b3df8860b6528722a527ca63138bedf
-size 12242821
+oid sha256:beb82bbbccc1e498763133c2ca43a33d34399701a3feadc33b33d7e28db10c2d
+size 9163427

models/subword_markov/ban_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "ban",
-  "unique_contexts": 515141,
-  "total_transitions": 31178021
+  "unique_contexts": 383801,
+  "total_transitions": 25973881
 }

models/subword_ngram/ban_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1d331dea0ac3863010c7f44bc8af6db64163208b39797a90da3a3db2cb5068c2
-size 89432
+oid sha256:f8d06b7f0eea6caff12eb48816bcfb2e270b3b2491f26727e3f0514ff35fc746
+size 106802

models/subword_ngram/ban_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "ban",
-  "unique_ngrams": 6739,
-  "total_ngrams": 31281701
+  "unique_ngrams": 7788,
+  "total_ngrams": 26072638
 }

models/subword_ngram/ban_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e87314f3eb227a47badf79f1a2964e00acef96da0cc33f99259523ddadbe0267
-size 689305
+oid sha256:9343f458518ed0d1f0a407a342e9b5cecbee42b529784e0dcb0b72918b79d1d3
+size 564805

models/subword_ngram/ban_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "ban",
-  "unique_ngrams": 56338,
-  "total_ngrams": 31247141
+  "unique_ngrams": 42522,
+  "total_ngrams": 26039719
 }

models/subword_ngram/ban_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8fe02f497267ca92b94af3687493cf750310e35ab85b736752b26ff56e51a295
-size 3233720
+oid sha256:b5c1474de28fe82c885eb32b731cad7f32987a3ef6b82a9ab47b3b48c9e2ce1a
+size 2348974

models/subword_ngram/ban_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "ban",
-  "unique_ngrams": 295874,
-  "total_ngrams": 31212581
+  "unique_ngrams": 208144,
+  "total_ngrams": 26006800
 }
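Each `*_metadata.json` above pairs with its parquet, which makes the section 3 table easy to sanity-check: branching factor is just the mean number of distinct continuations per context, and the metadata totals should match the raw counts. A sketch under the same assumed schema as earlier:

```python
import json
import pandas as pd

df = pd.read_parquet("models/subword_markov/ban_markov_ctx4_subword.parquet")
with open("models/subword_markov/ban_markov_ctx4_subword_metadata.json") as f:
    meta = json.load(f)

groups = df.groupby("context")["next"]            # assumed column names
print(groups.ngroups == meta["unique_contexts"])  # 383,801 in this commit
print(f"branching ≈ {groups.nunique().mean():.2f}")  # section 3 reports 2.96
```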
models/tokenizer/ban_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bedf8563a1afe2051346993a2e31dc66fa70f8ddf79e172628e2963e0a99dd47
-size 503307
+oid sha256:0194959437d4a8b04b5d3f37d4b95caf926ecb7c4136e157e06f185417710348
+size 507366

models/tokenizer/ban_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ban_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c710ee000231372d734483b9d28128e8359cb0283ee8585cfee7e121ff3e088c
-size 776699
+oid sha256:8d9e73114a8bfb09cc05cf62d77156539332a4b993d735d59704b21d2055bd5b
+size 785247

models/tokenizer/ban_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ban_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a7f6893ba9c5195f098265006c68b68d4f854f4f3aef663fccea594bd3ecddbb
-size 1338022
+oid sha256:0c4163cfc92fedee34979f21e46b3960651c5a1aea63e481f06e935586c4982c
+size 1355653

models/tokenizer/ban_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ban_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e665b6cb1dccbf5b7fa71b5c8e3e9ea156603fcf9f94aeeaa2d783a6631da368
-size 371066
+oid sha256:79dc41677c1fda74a9243a5e8e6b256d3a8d6e89e202f72fe68235e2c50ce0df
+size 372914

models/tokenizer/ban_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render.
models/vocabulary/ban_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:33bac4c04f637c4a7c7b840996ea84729668359a0bba018a9c721c27ccd525bc
-size 1847029
+oid sha256:8e4e8f2a999ef9ad888861be852297181f3f88c4f902b2c2d3a001b3f88a5eeb
+size 1669252

models/vocabulary/ban_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
 {
   "language": "ban",
-  "vocabulary_size": 109825,
+  "vocabulary_size": 96177,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.08234307722115898,
+    "type_token_ratio": 0.06865216883420294,
     "coverage": {
-      "top_100": 0.4083867316599922,
-      "top_1000": 0.6399472916582452,
-      "top_5000": 0.7739293501921968,
-      "top_10000": 0.8207767696718877
+      "top_100": 0.4271482296290528,
+      "top_1000": 0.6597757616661908,
+      "top_5000": 0.7939794084053682,
+      "top_10000": 0.8415540715935934
     },
-    "hapax_count": 244616,
-    "hapax_ratio": 0.690145891699888,
-    "total_documents": 34560
+    "hapax_count": 157713,
+    "hapax_ratio": 0.6211863405411793,
+    "total_documents": 32919
   }
 }
models/word_markov/ban_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:490f92d81b31f52d83fd0f2cd5b002bd546e2be9ec34f436bbd4b503863a38fd
-size 14573406
+oid sha256:9b32e21376297d35ef6bbd5d4573db01b79bce03f68060112590a5eccf22d3f2
+size 12122512

models/word_markov/ban_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "ban",
-  "unique_contexts": 354734,
-  "total_transitions": 5300849
+  "unique_contexts": 253714,
+  "total_transitions": 3665289
 }

models/word_markov/ban_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7933f21fe3dc7737031daf59c6f0b75d5f0ec0760ea5fbe33e643b28f9377775
-size 30285330
+oid sha256:c0cc40598890a3afe8525cbf4abca4b849a1276159d24aed089a152527e3bc5a
+size 26358283

models/word_markov/ban_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "ban",
-  "unique_contexts": 1504653,
-  "total_transitions": 5266289
+  "unique_contexts": 1298868,
+  "total_transitions": 3632370
 }

models/word_markov/ban_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b7d248fbf95bb7c4c07d3ad6a40e7491d998158dd768f5df49fb1a1fb786ad4d
-size 44547695
+oid sha256:fb834be2557bcc40bfaf6bbd8ef0594de27818bffcc89cc2b4522ed98fb5eb69
+size 35570394

models/word_markov/ban_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "ban",
-  "unique_contexts": 2520521,
-  "total_transitions": 5231730
+  "unique_contexts": 1983308,
+  "total_transitions": 3599451
 }

models/word_markov/ban_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:af85b8acafe95f6c6598d6c3d9f95ab7ddbfe52fec36eb64d4e73ed213f6ba54
-size 53030226
+oid sha256:b5519a7b2bd7e347aaad32b2cb5de479a42d7d09c0c4a8a797a857d5a6c9d26d
+size 40846702

models/word_markov/ban_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "ban",
-  "unique_contexts": 3008321,
-  "total_transitions": 5197173
+  "unique_contexts": 2240261,
+  "total_transitions": 3566533
 }
models/word_ngram/ban_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3a7e05025863931724c16c54d53b71d59185644881045f57e68714b7edff8d95
-size 1171748
+oid sha256:aa43961bd9c5b0bc97c6e58a2e5768d2613949b7a4ca2a54797209038ce2a44e
+size 878363

models/word_ngram/ban_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "ban",
-  "unique_ngrams": 86017,
-  "total_ngrams": 5300849
+  "unique_ngrams": 59688,
+  "total_ngrams": 3665289
 }

models/word_ngram/ban_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6382c65c522ccb8d948baed426584d1da5bc0135ba2f1a160c62c3a280a15c16
-size 1910455
+oid sha256:ec8256fd7645e4ba79178c17a3abd2abc970f54a61627bd98d6add5ed6abd22f
+size 1209013

models/word_ngram/ban_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "ban",
-  "unique_ngrams": 132180,
-  "total_ngrams": 5266289
+  "unique_ngrams": 77113,
+  "total_ngrams": 3632370
 }

models/word_ngram/ban_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4076c7de92dd17498060b87078339b4ccf91574d719a5e0d84560c92179b24b1
-size 3227946
+oid sha256:08d1fb1d14940fae2e02caf50543e44b49547bf66e935ba7c9d5f912cb36ee4f
+size 1928264

models/word_ngram/ban_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "ban",
-  "unique_ngrams": 212984,
-  "total_ngrams": 5231730
+  "unique_ngrams": 116715,
+  "total_ngrams": 3599451
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: f0e15c7dfa5c0a3b8db5a9a8a7f70495c8c32caadda184289bdbaf24eac144c7
  • Pointer size: 131 Bytes
  • Size of remote file: 166 kB

Git LFS Details (after)

  • SHA256: dac947887d0a4a952e949544a3c547a3bab8b1b57bf11e58ca84dcecf30f60ed
  • Pointer size: 131 Bytes
  • Size of remote file: 168 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED