Instructions to use zai-org/GLM-5.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use zai-org/GLM-5.1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="zai-org/GLM-5.1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-5.1")
model = AutoModelForCausalLM.from_pretrained("zai-org/GLM-5.1")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
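Both snippets above load the model at its default precision on the default device. For a model of this size you will likely want automatic dtype selection and multi-GPU sharding; a minimal sketch, assuming `accelerate` is installed:

```python
from transformers import AutoModelForCausalLM

# torch_dtype="auto" uses the checkpoint's native precision;
# device_map="auto" shards weights across available GPUs (requires accelerate).
model = AutoModelForCausalLM.from_pretrained(
    "zai-org/GLM-5.1",
    torch_dtype="auto",
    device_map="auto",
)
```
- Inference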
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use zai-org/GLM-5.1 with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "zai-org/GLM-5.1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zai-org/GLM-5.1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker:
```bash
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "zai-org/GLM-5.1"
```
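Once the server is up (via pip or Docker), you can also call it from Python. A minimal sketch using the `openai` client, assuming the default port 8000; the same pattern works against the SGLang server below on port 30000:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; any placeholder key works locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-5.1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```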
- SGLang
How to use zai-org/GLM-5.1 with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "zai-org/GLM-5.1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zai-org/GLM-5.1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images:
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "zai-org/GLM-5.1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zai-org/GLM-5.1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use zai-org/GLM-5.1 with Docker Model Runner:
```bash
docker model run hf.co/zai-org/GLM-5.1
```
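`docker model run` starts an interactive chat. Docker Model Runner also exposes an OpenAI-compatible API; a minimal sketch, assuming host-side TCP access is enabled on Docker's default port 12434 (the exact base URL depends on your Docker setup, so check the Docker Model Runner docs):

```python
from openai import OpenAI

# Assumes host TCP access, e.g. `docker desktop enable model-runner --tcp 12434`;
# adjust base_url to match your configuration.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="hf.co/zai-org/GLM-5.1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```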
Philosopher model :)
Hello team.
Thank you very much for this model. I've been using GLM models since 4.5 and am always positively surprised. Astonishing work :)
I observed when using it in Roo Code that this model has a veeeeery long thinking process. In some cases that is great, as it delivers very good answers and analysis.
In other cases, though, I would like to limit its philosophical nature without turning thinking off completely.
Is there any neat way to tell it "think, but don't write a book about my question"? 😁
Seed models brought thinking budgets to open source; check them out. Many other models now support thinking budgets.
You can introduce thinking budgets in an SGLang deployment.
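For example, with the GLM-4.5/4.6 family served on vLLM or SGLang you can at least toggle thinking per request through the chat template. A minimal sketch, assuming the same `chat_template_kwargs` knob carries over to GLM-5.1 (note this is an on/off toggle, not a budget):

```python
from openai import OpenAI

# Point at your local SGLang (or vLLM) OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-5.1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    # Passed through to the chat template; GLM-4.5+ templates understand
    # enable_thinking. False disables the thinking block entirely.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```

A thinking budget, by contrast, caps the reasoning tokens rather than switching them off.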
Would it just 'cut' the thinking block, or will the model actually 'know' that it should keep it shorter?
I found that GLM-4.7 and later models do support a thinking budget on SiliconFlow (while some older models like Kimi-K2 and DeepSeek don't).
I just wonder why no one mentions that their models support a thinking budget (like Qwen3), and why none of these open models have an ADJUSTABLE THINKING LEVEL yet (it can be quite useful sometimes).
Does this model also support retained thinking? What is its impact on reasoning and overall speed?
> You can introduce thinking budgets in an SGLang deployment.

I am using llama.cpp to run GLM-5.1, and it also has a 'reasoning-budget' parameter where we can set the number of tokens.
What is the recommended number of tokens for thinking in coding tasks with tools like RooCode?
Also, llama.cpp has a 'reasoning-budget-message' parameter, which is a message injected before the end-of-thinking tag when the reasoning budget is exhausted.
I have seen some people use messages like: "I have thought about this for long enough. Time to answer"
Do you have a recommended message to use?