Moxin-GGUF Collection: Moxin's llama.cpp Quants of LLMs
We sincerely thank the open-source community developers and contributors at unsloth for providing the BF16 version and the imatrix file.
We really appreciate the attention, and we're also happy to share additional quantization variants for everyone to try out and experiment with. We hope you enjoy them!
For llama.cpp, please use the `--jinja` flag.
- Q4_K_XL: 204.34 GiB (4.92 BPW; see the size arithmetic after this list)
- Other quant versions (coming soon)
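BPW (bits per weight) ties the on-disk file size to the model's parameter count: parameters ≈ size in bytes × 8 / BPW. A minimal sanity-check sketch of that arithmetic, using only the figures from the list above (the result is approximate, since some bits go to metadata and non-weight tensors):

```python
# Rough relationship between GGUF size, bits per weight (BPW), and parameter count:
# parameters ~= size_bytes * 8 / bpw
size_gib = 204.34   # Q4_K_XL size from the list above
bpw = 4.92          # bits per weight from the list above

size_bits = size_gib * (1024 ** 3) * 8
approx_params = size_bits / bpw
print(f"~{approx_params / 1e9:.0f}B parameters")  # roughly 357B, in the ballpark of the full model
```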
```bash
huggingface-cli download moxin-org/GLM-4.6-GGUF --include "*Q4_K_XL*" --local-dir ./GLM-4.6-GGUF
```
```python
# !pip install huggingface_hub hf_transfer
import os
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # optional: enable the faster hf_transfer backend
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="moxin-org/GLM-4.6-GGUF",
    local_dir="GLM-4.6-GGUF",
    allow_patterns=["*Q4_K_XL*"],
)
```
Downloads are available via huggingface_hub, huggingface-cli, snapshot_download, and xet.
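Since more quant variants are on the way, it can help to check what is currently published before downloading. A minimal sketch using huggingface_hub's `list_repo_files` (the pattern filter here is just an example):

```python
from huggingface_hub import list_repo_files

# List every file in the repo, then keep only the GGUF shards of a given quant.
files = list_repo_files("moxin-org/GLM-4.6-GGUF")
q4_shards = [f for f in files if "Q4_K_XL" in f and f.endswith(".gguf")]
for f in q4_shards:
    print(f)
```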
Example of running the GGUF with a local build of llama.cpp (llama-cli / llama-server):
```bash
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
# add -DLLAMA_CURL=OFF if the configure step fails with a CURL-related error
cmake -B build -DGGML_CUDA=ON -DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release -j --clean-first
```
```bash
build/bin/llama-cli \
    -m GLM-4.6-GGUF/Moxin-Q4_K_XL/GLM-4.6-Q4_K_XL-00001-of-00009.gguf \
    -ngl 99 \
    --jinja \
    --temp 1.0 \
    --top-k 40 \
    --top-p 0.95 \
    --min-p 0.01 \
    --ctx-size 16384    # or a smaller context such as 4096 or 8192
```
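llama-server accepts the same model and sampling flags; assuming you start it with something like `build/bin/llama-server -m GLM-4.6-GGUF/Moxin-Q4_K_XL/GLM-4.6-Q4_K_XL-00001-of-00009.gguf -ngl 99 --jinja --ctx-size 16384 --port 8080` (the port is an arbitrary choice here), you can query its OpenAI-compatible chat endpoint. A minimal sketch in Python:

```python
import requests

# Query llama-server's OpenAI-compatible chat endpoint (assumes it listens on localhost:8080).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Summarize GLM-4.6 in one sentence."}],
        # Same sampling settings as the llama-cli example above.
        "temperature": 1.0,
        "top_k": 40,
        "top_p": 0.95,
        "min_p": 0.01,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```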
If this work is helpful, please kindly cite it as:
```bibtex
@article{chen2025collaborative,
  title={Collaborative Compression for Large-Scale MoE Deployment on Edge},
  author={Chen, Yixiao and Xie, Yanyue and Yang, Ruining and Jiang, Wei and Wang, Wei and He, Yong and Chen, Yue and Zhao, Pu and Wang, Yanzhi},
  journal={arXiv preprint arXiv:2509.25689},
  year={2025}
}
```
This repository builds upon the outstanding work of the following open-source authors and projects:
We sincerely thank them for their excellent contributions to the open-source community.
Base model: zai-org/GLM-4.6 (4-bit quantization)