Qwen3-30B-A3B-Instruct-2507 GGUF (ShapeLearn Quantized)

This is a GGUF-quantized version of Qwen3-30B-A3B-Instruct-2507 produced with ByteShape's ShapeLearn, which learns the optimal datatype for each tensor to maintain high quality even at very low bit lengths.

To learn more about ShapeLearn and to see detailed benchmarks across GPUs, CPUs, and even the Raspberry Pi, please visit our blog.

If you have questions or want to share feedback, reach us on Reddit.

How to Pick a Model

We provide CPU- and GPU-optimized variants for llama.cpp:

  • CPUs: models labeled KQ, optimized for CPU inference using predominantly KQ quantization.
  • GPUs: models labeled IQ, optimized for GPU inference using a hybrid of KQ and IQ quantization for better throughput.

Each hardware target includes a range of models covering different size and quality tradeoffs.

The charts below show quality vs tokens per second for each device, comparing ShapeLearn models with Unsloth or MagicQuant baselines.

Selection rule: choose the highest-quality model at your target throughput, or the fastest model that still meets your quality requirement. For example, if you need at least 98% normalized quality on a CPU, KQ-6 (3.61 bits/weight, 13.8 GB) is the smallest model below that qualifies.
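Once you have picked a file, a minimal way to try it with llama.cpp looks like the sketch below. This assumes llama.cpp is already built and the huggingface-cli tool is installed, and it uses the 3.63 bpw IQ file from this repo purely as an example; any other file works the same way.

huggingface-cli download byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF Qwen3-30B-A3B-Instruct-2507-IQ4_XS-3.63bpw.gguf --local-dir .
llama-cli -m Qwen3-30B-A3B-Instruct-2507-IQ4_XS-3.63bpw.gguf -ngl 99 -p "Hello"

Here -ngl 99 offloads all layers to the GPU; omit it (and pick a KQ file) for CPU-only inference.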

CPU Models

CPU Benchmark - Intel

The table is sorted by model size; match the numbered points in the chart to the model IDs:

Model ID   Bits/Weight   Model Size   Normalized Quality
KQ-1       2.66          10.2 GB      92.84%
KQ-2       2.70          10.3 GB      94.18%
KQ-3       2.85          10.9 GB      95.49%
KQ-4       3.18          12.1 GB      96.97%
KQ-5       3.25          12.4 GB      97.97%
KQ-6       3.61          13.8 GB      98.75%
KQ-7       3.92          14.9 GB      98.86%
KQ-8       4.41          16.9 GB      99.34%
KQ-9       4.67          17.8 GB      99.75%

GPU Models

GPU Benchmark - RTX 5090

The table is sorted by model size; match the numbered points in the chart to the model IDs:

Model ID   Bits/Weight   Model Size   Normalized Quality
IQ-1       2.69          10.3 GB      94.24%
IQ-2       2.75          10.5 GB      95.48%
IQ-3       3.02          11.5 GB      95.83%
IQ-4       3.29          12.5 GB      97.35%
IQ-5       3.63          13.9 GB      97.74%
IQ-6       3.87          14.8 GB      98.66%
IQ-7       4.41          16.9 GB      99.34%
IQ-8       4.67          17.8 GB      99.75%

Notes on quantization labels

The labels you see (for example, IQ4_XS) exist only so that Hugging Face lists our models in its GGUF table; they do not correspond to the conventional quantization profiles defined in llama.cpp. In our files, the label indicates the primary quantization approach and the average bit length. Both KQ and IQ models may use a mix of quantization techniques optimized for their target hardware, which is why several models can share the same tag.
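If you want to verify the per-tensor datatypes in a given file yourself, one option (not an official ByteShape workflow, just a convenient inspection tool) is the gguf-dump utility that ships with the gguf Python package:

pip install gguf
gguf-dump Qwen3-30B-A3B-Instruct-2507-IQ4_XS-3.63bpw.gguf

This prints the file's metadata and the quantization type of every tensor, which makes the mixed-datatype layout visible.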

Running these models with Ollama

All GGUF files in this repo can be used directly with Ollama.

To run a model with Ollama, use:

ollama run hf.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF:FILE_NAME.gguf

Replace FILE_NAME.gguf with the GGUF filename you want. For example:

ollama run hf.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF:Qwen3-30B-A3B-Instruct-2507-IQ4_XS-3.63bpw.gguf
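You can also pass a prompt directly on the command line (the prompt text here is only an illustration):

ollama run hf.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF:Qwen3-30B-A3B-Instruct-2507-IQ4_XS-3.63bpw.gguf "Explain mixture-of-experts models in two sentences."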