-----------------------------------------------
- Model Details and Specifications: -
-----------------------------------------------

Magistral-Small-2509-Vision (24B Parameters)

Description:
This model was re-configured using MistralAI's "Mistral3ForConditionalGeneration" processor and configuration, then saved as a Vision Multimodal version of Magistral-Small-2509 with Vision re-enabled and working.

The chat-template and system-prompt files have also been reworked and are custom compared to the originals supplied by MistralAI. No modifications, edits, or additional configuration are required to use this model with Ollama or Llama.cpp; both Vision and Text work. (^.^)
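If you do want to customize further, Ollama's standard mechanism for baking a template and system prompt into a local model build is a Modelfile. Below is a minimal sketch only; the GGUF filename and the prompt text are assumed placeholders, not the actual files shipped with this release (a TEMPLATE directive can likewise override the chat template):

  # Modelfile (placeholder values; substitute the files from this repository)
  FROM ./Magistral-Small-2509-Vision-Q4_K_M.gguf
  SYSTEM """You are a helpful multimodal assistant."""

  # Build and run the local model:
  #   ollama create magistral-vision -f Modelfile
  #   ollama run magistral-vision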

This release contains:
GGUF converted and quantized model files
(compatible with both Ollama and Llama.cpp)

IMPORTANT NOTICE as of (GMT-8) 02:45 November 20th 2025:
Please note: the chat-template AND the system-prompt are different from what MistralAI supplied. You can absolutely use the standard template and prompt, but these are the files I personally created, used, and found to perform very well in both quality and usability. The standard files released by MistralAI will be uploaded within the next few days, but this model is not only usable - it works very well with what is currently provided in this release.

Happy LLM Inferencing,
-- Jon Z (EnlistedGhost)


---------------------------------------------------
- Conversion and GGUF Quantization: -
---------------------------------------------------

Quantized GGUF version of:

  • EnlistedGhost/Magistral-Small-2509-Vision
    (by MistralAI - modified by EnlistedGhost)

Original Model Link (Safetensors):


-------------------------------
---- Updates & News ----
-------------------------------

Model Updates (as of: November 20th, 2025)

  • Uploaded: GGUF Converted and Quantized model files
  • Created: ModelCard
    (this page)

--------------------------------------
---- How to run this Model ----
--------------------------------------

Compatible Software (Required to use this Model)
You can run this model using either Ollama or Llama.cpp.
(Below are instructions for running these GGUF files with Ollama)

How to run this Model using Ollama
You can run this model with the "ollama run" command.
Simply copy & paste one of the commands from the table below into
your console, terminal, or PowerShell window.
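For example, Ollama can pull GGUF files directly from Hugging Face with the hf.co/{user}/{repo}:{quant} form. The command below is a sketch only; the Q4_K_M tag is an assumed placeholder, so substitute a quant tag from the table once uploads finish:

  ollama run hf.co/EnlistedGhost/Magistral-Small-2509-Vision-GGUF:Q4_K_M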

Quant Type   File Size   Command
QX_X         0.00 GB     (Currently uploading files, check again very soon!)

Vision Projector (mmproj) Files
The mmproj files supply the vision encoder/projector that enables image input alongside the main model.
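With Llama.cpp, the mmproj file is passed separately from the main model. A minimal sketch using llama.cpp's multimodal CLI; the filenames here are assumed placeholders, so match them to the actual files in this repository:

  llama-mtmd-cli \
    -m Magistral-Small-2509-Vision-Q4_K_M.gguf \
    --mmproj mmproj-Magistral-Small-2509-Vision-F16.gguf \
    --image example.png \
    -p "Describe this image."

The same -m/--mmproj pairing also works with llama-server if you prefer an OpenAI-compatible HTTP endpoint.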


-------------------------------------------------
- Legal, Citations and Usage Details: -
-------------------------------------------------

Intended Use

Same as original:

Out-of-Scope Use

Same as original:

Bias, Risks, and Limitations

Same as original:

Evaluation

  • This model has NOT been evaluated in any form, scope or method of use.
  • !!! USE AT YOUR OWN RISK !!!
  • !!! NO WARRANTY IS PROVIDED OF ANY KIND !!!

Citation (Original Paper)

[MistralAI Magistral-Small-2509 Original Paper]

Detailed Release Information

  • Originally Developed by: [MistralAI]
  • Modified with Vision re-enabled by: [EnlistedGhost]
  • MMPROJ (Vision) Quantized by: [EnlistedGhost]
  • Model Quantized for GGUF by: [EnlistedGhost]
  • Model type & format: [Quantized/GGUF]
  • License type: [Apache-2.0]

Model Card Authors and Contact

[EnlistedGhost]
