macandchiz/Mistral-7B-Instruct-v0.3-GGUF

Quantized version of: mistralai/Mistral-7B-Instruct-v0.3

Available Files

The following GGUF quantization variants are available:

  • mistral-7b-instruct-v0.3-q2_k.gguf
  • mistral-7b-instruct-v0.3-q3_k_s.gguf
  • mistral-7b-instruct-v0.3-q3_k_m.gguf
  • mistral-7b-instruct-v0.3-q3_k_l.gguf
  • mistral-7b-instruct-v0.3-q4_0.gguf
  • mistral-7b-instruct-v0.3-q4_1.gguf
  • mistral-7b-instruct-v0.3-q4_k_s.gguf
  • mistral-7b-instruct-v0.3-q4_k_m.gguf
  • mistral-7b-instruct-v0.3-q5_0.gguf
  • mistral-7b-instruct-v0.3-q5_1.gguf
  • mistral-7b-instruct-v0.3-q5_k_s.gguf
  • mistral-7b-instruct-v0.3-q5_k_m.gguf
  • mistral-7b-instruct-v0.3-q6_k.gguf
  • mistral-7b-instruct-v0.3-q8_0.gguf
  • mistral-7b-instruct-v0.3-f16.gguf
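
Any of the files above can be fetched individually. As a minimal sketch, assuming the huggingface_hub Python package is installed (pip install huggingface_hub), a single variant can be downloaded like this; the q4_k_m choice is illustrative:

```python
# Download one quantization variant from this repo.
# Sketch assuming the `huggingface_hub` package is installed;
# any other download method (browser, curl) works just as well.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="macandchiz/Mistral-7B-Instruct-v0.3-GGUF",
    filename="mistral-7b-instruct-v0.3-q4_k_m.gguf",  # medium size, good quality
)
print(model_path)  # local cache path of the downloaded file
```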

Quantization Information

  • q2_k: Smallest size, lowest quality
  • q3_k_s, q3_k_m, q3_k_l: Small size, low-quality variants
  • q4_0, q4_1, q4_k_s, q4_k_m: Medium size, good quality (recommended for most use cases)
  • q5_0, q5_1, q5_k_s, q5_k_m: Larger size, better quality
  • q6_k: Large size, high quality
  • q8_0: Very large size, very high quality
  • f16: Full 16-bit precision, effectively unquantized (largest size)

Choose the quantization level that best fits your needs based on the trade-off between file size and model quality.
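
A downloaded file can then be loaded with any GGUF-compatible runtime, such as llama.cpp, Ollama, or llama-cpp-python. Below is a minimal inference sketch using llama-cpp-python (pip install llama-cpp-python); the context size and GPU offload settings are illustrative, not prescriptive:

```python
# Minimal inference sketch assuming llama-cpp-python is installed.
# Parameter values below are illustrative; tune them to your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.3-q4_k_m.gguf",  # path to a downloaded variant
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if a GPU-enabled build is installed
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(result["choices"][0]["message"]["content"])
```

The chat template is read from the GGUF metadata, so the instruct formatting is applied automatically by the runtime.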
