GGUFs of Llama-3-8B-16K
GGUF conversion and quantization of https://huggingface.co/mattshumer/Llama-3-8B-16K, done with Maxime Labonne's AutoGGUF.
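A minimal usage sketch for the resulting quants, assuming llama-cpp-python and huggingface_hub are installed; the repo id and filename below are placeholders, not the actual file names in this repository.

```python
# Sketch: download one of the GGUF quants and run it with llama-cpp-python.
# The repo_id and filename are hypothetical placeholders -- substitute the
# actual repository id and the quant file you want.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="your-username/Llama-3-8B-16K-GGUF",  # placeholder repo id
    filename="llama-3-8b-16k.Q4_K_M.gguf",        # placeholder quant filename
)

# n_ctx can be raised toward 16K, since the base model was extended to that length.
llm = Llama(model_path=gguf_path, n_ctx=16384)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```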
Original model card
This is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the Yukang/LongAlpaca-16k-length dataset.
rope_theta was set to 1000000.0. Trained with Axolotl.
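To verify the extended-context settings on the upstream (non-GGUF) model, a quick check with transformers, assuming the values are recorded in its config.json:

```python
# Inspect the upstream model's config for the RoPE and context-length settings.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mattshumer/Llama-3-8B-16K")
print(cfg.rope_theta)               # 1000000.0 per the model card
print(cfg.max_position_embeddings)  # context length, if updated in the config
```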
Quantizations provided: 4-bit, 5-bit, 6-bit, 8-bit.