# Model Card for LoopTool-8B

## Model Details

### Model Description

LoopTool-8B is obtained by iteratively fine-tuning Qwen3-8B, with a particular emphasis on strengthening the model's tool-calling (tool invocation) capabilities.
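
Below is a minimal usage sketch with 🤗 Transformers, assuming LoopTool-8B keeps Qwen3's standard chat template and tool-call format. The `get_weather` tool schema and the prompt are illustrative assumptions, not part of the model card; the repository id follows the model tree on this page.

```python
# Minimal sketch, assuming LoopTool-8B keeps Qwen3's chat template and tool-call format.
# The tool schema and prompt below are illustrative assumptions, not taken from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhuiguang-ning/LoopTool-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical tool schema, used only to illustrate the expected JSON-schema format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Shanghai right now?"}]

# The chat template renders the tool schemas into the prompt so the model can
# decide whether to answer directly or emit a tool call.
input_ids = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the template is unchanged, the model should emit its tool call as a structured block (a `<tool_call>` JSON segment in the Qwen3 format) that the calling application parses and executes before returning the result in a follow-up turn.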

## Model Sources

- Paper: [LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls](https://arxiv.org/abs/2511.09148)

## Model Performance

### Main Results on BFCL-v3

| Model | Overall | Non-Live | Live | Multi-Turn |
|---|---|---|---|---|
| Qwen3-8B | 66.34 | 88.81 | 78.54 | 33.00 |
| LoopTool-8B | 74.93 | 89.52 | 84.72 | 50.88 |
| Qwen3-32B | 69.25 | 88.90 | 77.83 | 43.12 |
| LoopTool-32B | 79.32 | 91.83 | 88.58 | 57.75 |

### Main Results on ACEBench (English)

| Model | Overall | Normal | Special | Agent |
|---|---|---|---|---|
| Qwen3-8B | 67.1 | 70.9 | 78.0 | 34.2 |
| LoopTool-8B | 73.4 | 78.0 | 80.7 | 43.3 |
| Qwen3-32B | 72.2 | 77.3 | 76.0 | 46.7 |
| Kimi-K2-0711 | 77.4 | 78.9 | 81.3 | 65.0 |
| LoopTool-32B (1st among open-source models) | 77.5 | 80.5 | 78.7 | 64.1 |

## Citation

If you find our work helpful, please consider citing it:

```bibtex
@misc{zhang2025looptool,
      title={LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls},
      author={Kangning Zhang and Wenxiang Jiao and Kounianhua Du and Yuan Lu and Weiwen Liu and Weinan Zhang and Lei Zhang and Yong Yu},
      year={2025},
      eprint={2511.09148},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.09148},
}
```