Problem running GGUF

#3
by surfiend - opened

Trying to run the model with Ollama (version 0.13.5) results in:

ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0
pulling manifest
...
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
llama_model_load_from_file_impl: failed to load model

Liquid AI org

Hey! We’re aware of this issue. It’s related to a recent fix in llama.cpp and will be resolved in the next sync/version update for Ollama. For now, we recommend using v0.13.4.

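If you need to roll back, Ollama's official install script honors an `OLLAMA_VERSION` environment variable (a sketch assuming Linux or macOS; on Windows you would grab the 0.13.4 installer from the releases page instead):

```shell
# Install a specific Ollama release (here 0.13.4) via the official
# install script, which reads the OLLAMA_VERSION variable.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.4 sh

# Confirm the downgrade took effect before retrying the model
ollama --version
```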
Will you get bartowski and/or unsloth quants done as well?

Found it, sorry! I did something wrong with the search 😁

Still getting this error:

ollama --version
ollama version is 0.15.2
