clemylia/reeci-gguf

This is a GGUF quantized version of the Clemylia/ReeCi model.

Converted using llama.cpp.
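For reference, conversions like this are usually produced with llama.cpp's Hugging Face to GGUF converter. The sketch below is a hypothetical reconstruction, not the exact command used for this repo; the script name and flags have changed across llama.cpp versions, and the local checkpoint path is an assumption.

# Hypothetical reconversion sketch (not the exact command used for this repo).
# Assumes a local copy of the original Clemylia/ReeCi checkpoint and a llama.cpp
# checkout that provides convert_hf_to_gguf.py.
python convert_hf_to_gguf.py /path/to/ReeCi --outfile Reecifp16.gguf --outtype f16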

How to use

# First, clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Then, download this GGUF file
# For example, using wget:
wget https://huggingface.co/clemylia/reeci-gguf/resolve/main/Reecifp16.gguf

# Compile llama.cpp (if you haven't already)
make

# Run the model
./main -m Reecifp16.gguf -p "Hello, my name is"
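The file above is an F16 GGUF, so it runs unquantized. If you want a smaller file, you can quantize it further with llama.cpp's quantize tool, as sketched below. The Q4_K_M type and output filename are only examples; note that newer llama.cpp releases build with CMake and name the binaries llama-quantize and llama-cli instead of quantize and main.

# Optional: quantize the F16 GGUF to a smaller 4-bit file (example type and name).
# On newer llama.cpp builds the binary is called llama-quantize.
./quantize Reecifp16.gguf Reeci-Q4_K_M.gguf Q4_K_M

# Run the quantized file the same way
./main -m Reeci-Q4_K_M.gguf -p "Hello, my name is"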

Enjoy!

GGUF model details

Model size: 51M params
Architecture: gpt2
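You can verify these details locally by dumping the GGUF metadata. The sketch below assumes the gguf Python package (shipped with llama.cpp as gguf-py) and its gguf-dump command-line tool; tool names may differ between versions.

# Inspect the GGUF header and metadata (architecture, parameter count, etc.).
# Assumes the gguf Python package and its gguf-dump command.
pip install gguf
gguf-dump Reecifp16.gguf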

Model tree for Clemylia/reeci-gguf

Base model (finetuned): Clemylia/ReeCi
Quantized versions: 2, including this model