Philosopher model :)

#9
by AImhotep - opened

Hello team.

Thank you very much for this model. I've been using GLM models since 4.5 and am always positively surprised. Astonishing work :)

I noticed when using it in Roo Code that this model has a veeeeery long thinking process. In some cases that's great, as it delivers very good answers and analysis.
In other cases, though, I'd like to rein in its philosophical nature without completely turning off thinking.

Is there any neat way to tell it 'think, but don't write a book about my question'? 😁

Seed models brought thinking budgets to open source, check them out. Many other models now support thinking budgets too.

You can introduce thinking budgets in an sglang deployment.

Would it just 'cut' the thinking block, or will the model actually 'know' that it should keep it shorter?
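Mechanically, a thinking budget is usually enforced by counting tokens inside the reasoning block and, once the budget is used up, force-injecting the end-of-thinking tag (often preceded by a short wrap-up message) so decoding continues in answer mode instead of being silently truncated mid-thought. A minimal sketch of that logic; the `<think>` tags and the wrap-up message here are illustrative assumptions, not GLM's exact protocol:

```python
# Sketch of thinking-budget enforcement over a token stream.
# Tag strings and the wrap-up message are illustrative assumptions.
THINK_OPEN = "<think>"
THINK_CLOSE = "</think>"
BUDGET_MESSAGE = "I have thought about this for long enough. Time to answer."

def enforce_thinking_budget(tokens, budget, message=BUDGET_MESSAGE):
    """Pass tokens through, capping the reasoning block at `budget` tokens.

    When the budget is exhausted, a wrap-up message plus the
    end-of-thinking tag are injected, and the remaining over-budget
    thinking tokens are dropped.
    """
    out = []
    state = "answer"          # "answer" | "thinking" | "dropping"
    used = 0
    for tok in tokens:
        if tok == THINK_OPEN:
            state, used = "thinking", 0
            out.append(tok)
        elif tok == THINK_CLOSE:
            if state == "thinking":
                out.append(tok)   # block ended within budget
            state = "answer"      # if "dropping", close tag was already injected
        elif state == "thinking":
            if used < budget:
                out.append(tok)
                used += 1
            else:
                # Budget exhausted: wrap up and close the block ourselves.
                out.append(message)
                out.append(THINK_CLOSE)
                state = "dropping"
        elif state == "answer":
            out.append(tok)
        # state == "dropping": discard over-budget thinking tokens
    return out
```

In a real serving stack the injection happens inside the decode loop, so the model sees the closed reasoning block and continues generating the answer from there, which is what makes it 'know' rather than just getting cut off.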

I found that GLM-4.7 and later models do support a thinking budget on SiliconFlow (while some older models like Kimi-K2 and DeepSeek don't).
I just wonder why no one mentions that their models support a thinking budget (like Qwen3), and why none of these open models have an ADJUSTABLE THINKING LEVEL yet (sometimes it can be quite useful).

Does this model also support retained thinking? What is its impact on reasoning quality and overall speed?

You can introduce thinking budgets in an sglang deployment.

@ZHANGYUXUAN-zR

** I am using llama.cpp to run GLM5.1, and it also has a 'reasoning-budget' parameter where we can set the number of tokens.
What is the recommended number of thinking tokens for coding tasks with tools like RooCode?

** Also, llama.cpp has a 'reasoning-budget-message' parameter: a message injected before the end-of-thinking tag when the reasoning budget is exhausted.
I have seen people use messages like: "I have thought about this for long enough. Time to answer."
Do you have a recommended message to use?
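For reference, combining the two flags described above, a server launch might look like the sketch below. The flag names are taken from this thread; please verify them against `llama-server --help` for your llama.cpp build. The model path is a placeholder, and the budget value is arbitrary, not a recommendation:

```shell
# Hypothetical llama-server launch with a capped thinking budget.
# Flag names come from the discussion above; check `llama-server --help`
# on your build. Model path and budget value are placeholders.
llama-server \
  -m ./model.gguf \
  --reasoning-budget 4096 \
  --reasoning-budget-message "I have thought about this for long enough. Time to answer."
```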
