Problem running GGUF
Trying to run the model with Ollama (version 0.13.5) results in:
ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0
pulling manifest
...
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
llama_model_load_from_file_impl: failed to load model
Hey! We’re aware of this issue. It’s related to a recent fix in llama.cpp and will be resolved in the next sync/version update for Ollama. For now, we recommend using v0.13.4.
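As a stopgap, a sketch of how to pin the suggested version: the official Ollama install script honors an `OLLAMA_VERSION` environment variable, so downgrading to v0.13.4 could look like this (adjust for your platform; package-manager installs may need a different approach):

```shell
# Reinstall Ollama pinned to v0.13.4 (the version the maintainers suggest above).
# The install script reads OLLAMA_VERSION to select a specific release.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.4 sh

# Confirm the downgrade took effect, then retry the model.
ollama --version
ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0
```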
Will you get bartowski and/or unsloth quants done as well?
Found it, sorry. I did something wrong with the search 😁
Still seeing this error with:
ollama --version
ollama version is 0.15.2