## News
🎉 Nanbeige4-3B-Thinking-2511 debuted at #11 on WritingBench! Despite having only 3B parameters, its creative-writing chops rival those of hundred-billion-parameter models.
## Introduction
Nanbeige4-3B-Thinking-2511 is an enhanced iteration of our previous Nanbeige4-3B-Thinking-2510. Through advanced distillation techniques and reinforcement learning (RL) optimization, we have effectively scaled the model’s reasoning capacity, yielding superior performance across a broad range of benchmarks. On math and science reasoning benchmarks, Nanbeige4-3B-Thinking-2511 outperforms Qwen3-4B-Thinking-2507, Qwen3-8B-Thinking-2504, and Qwen3-14B-Thinking-2504 by a significant margin. In addition, it achieves state-of-the-art (SOTA) results among models under 32B parameters on general tasks such as Arena-Hard-V2 and BFCL-V4. This marks a major milestone in delivering powerful, efficient reasoning at a compact scale.
- Technical Report: https://huggingface.co/Nanbeige/Nanbeige4-3B-Thinking-2511/blob/main/Nanbeige4-3B-Technical-Report.pdf
## Quickstart
For the chat scenario:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

messages = [
    {'role': 'user', 'content': 'Which number is bigger, 9.11 or 9.8?'}
]

# Render the conversation with the model's chat template.
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False
)

input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
# 166101 is the model's end-of-turn token id.
output_ids = model.generate(input_ids.to(model.device), eos_token_id=166101)
# Decode only the newly generated tokens, skipping the prompt.
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
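As a thinking model, Nanbeige4-3B-Thinking-2511 emits a reasoning trace before its final answer. A minimal post-processing sketch, assuming the trace is delimited by `<think>...</think>` tags (a common convention for thinking models; inspect the decoded output above to confirm the exact delimiters before relying on this):

```python
import re

def split_thinking(text):
    """Best-effort split of a '<think>...</think>' trace from the final answer.

    Assumes <think> tags delimit the reasoning; adjust the pattern if the
    model's actual output uses different markers.
    """
    match = re.search(r'<think>(.*?)</think>', text, flags=re.DOTALL)
    if match is None:
        return '', text.strip()  # no trace found; treat everything as the answer
    return match.group(1).strip(), text[match.end():].strip()

thinking, answer = split_thinking(resp)
print('Final answer:', answer)
```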
For the tool-use scenario:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

messages = [
    {'role': 'user', 'content': 'Help me check the weather in Beijing now'}
]

# Tool schema. Note that 'required' sits at the parameters level,
# alongside 'properties', not inside it.
tools = [{
    'type': 'function',
    'function': {
        'name': 'SearchWeather',
        'description': 'Find out the current weather in a certain place on a certain day.',
        'parameters': {
            'type': 'dict',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'A city in China.'
                }
            },
            'required': ['location']
        }
    }
}]

# Render the conversation and the tool schema with the chat template.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    tokenize=False
)

input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to(model.device), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
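When the model responds with a tool call, the usual loop is to execute the tool yourself, append the result as a tool message, and generate again. The sketch below is illustrative only: `parse_tool_call` is a best-effort helper written against an assumed JSON serialization (adapt it to the tool-call format you actually observe), `lookup_weather` is a stand-in for a real weather API, and whether the chat template accepts a `tool` role message should be verified against the model's template.

```python
import json

def parse_tool_call(text):
    # Best-effort: decode the first JSON object in the response that carries
    # a 'name' field. The real serialization is model-specific; adapt as needed.
    decoder = json.JSONDecoder()
    for start, ch in enumerate(text):
        if ch != '{':
            continue
        try:
            obj, _ = decoder.raw_decode(text[start:])
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and 'name' in obj:
            return obj
    return None

def lookup_weather(location):
    # Illustrative stand-in for a real weather service.
    return json.dumps({'location': location, 'condition': 'sunny', 'temp_c': 21})

call = parse_tool_call(resp)
if call is not None and call['name'] == 'SearchWeather':
    result = lookup_weather(**call.get('arguments', {}))
    # Feed the tool result back for a second generation round.
    messages.append({'role': 'assistant', 'content': resp})
    messages.append({'role': 'tool', 'content': result})
    prompt = tokenizer.apply_chat_template(
        messages, tools=tools, add_generation_prompt=True, tokenize=False
    )
    input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
    output_ids = model.generate(input_ids.to(model.device), eos_token_id=166101)
    print(tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True))
```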
## Limitations
While we place great emphasis on safety during training, striving to ensure the model's outputs align with ethical and legal requirements, its limited size and probabilistic nature mean it may still produce unexpected outputs, including harmful content such as bias or discrimination. Please do not propagate such content. We assume no responsibility for the consequences of disseminating inappropriate information.
## Citation
If you find our model useful or want to use it in your projects, please cite this Hugging Face project.
## Contact
If you have any questions, please raise an issue or contact us at [email protected].