Weiver-Q1 is a compact, efficient variant of Qwen3-4B designed for general coding and reasoning tasks under limited computational resources. It was fine-tuned on the OpenCodeReasoning dataset, which contains high-quality synthetic data distilled from DeepSeek-R1 and covers a broad range of programming and logical-reasoning scenarios. Fine-tuning was performed with QLoRA, which significantly reduces the training memory footprint while maintaining strong performance.
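
As a rough illustration, a QLoRA fine-tune of a Qwen3-4B-class model typically follows the sketch below, using the Hugging Face transformers, bitsandbytes, and peft libraries. The base checkpoint ID, LoRA rank, alpha, dropout, and target modules shown here are illustrative assumptions, not the exact recipe used to train Weiver-Q1.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 so adapters can be trained with a small memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B",                 # assumed base checkpoint for this sketch
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")

# Attach LoRA adapters to the attention and MLP projections (hyperparameters are illustrative).
base_model = prepare_model_for_kbit_training(base_model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Training then proceeds with a standard supervised fine-tuning loop (for example, trl's SFTTrainer) over the OpenCodeReasoning examples; because only the low-rank adapters are updated while the frozen base weights stay in 4-bit, the memory footprint remains small.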

This release is provided as a GGUF INT4 (Q4_K_M) quantized model, making it well-suited for local inference, edge devices, and low-latency applications. The model is released under the Apache 2.0 license, allowing commercial and non-commercial use.
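
For local inference, the Q4_K_M GGUF file can be loaded directly with llama.cpp or its Python bindings (llama-cpp-python). The sketch below is a minimal example; the local file name `weiver-q1-q4_k_m.gguf` is a placeholder, not necessarily the published file name.

```python
from llama_cpp import Llama

# Load the quantized GGUF weights (placeholder file name).
llm = Llama(
    model_path="./weiver-q1-q4_k_m.gguf",
    n_ctx=8192,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if available, otherwise set to 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
    ],
    max_tokens=512,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF file can also be run from the command line with llama.cpp or served through any GGUF-compatible runtime such as Ollama or LM Studio.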
