project | SFT dataset | RL dataset | SFT model | RL model

OpenThinker-Agent-v1

OpenThoughts-Agent is an open-source effort to curate the best datasets for training agents. Our first release includes datasets, models, and our research codebase.

OpenThinker-Agent-v1 is a model trained for agentic benchmarks such as Terminal-Bench 2.0 and SWE-Bench.

The OpenThinker-Agent-v1 model is post-trained from Qwen/Qwen3-8B: it is first supervised fine-tuned (SFT) on the OpenThoughts-Agent-v1-SFT dataset, then trained with reinforcement learning (RL) on the OpenThoughts-Agent-v1-RL dataset.

This model is the final model after both SFT and RL. For the model after the SFT stage only, see OpenThinker-Agent-v1-SFT.

OpenThinker-Agent-v1 Model Performance

Our OpenThinker-Agent-v1 model is the state-of-the-art model at its scale on agent benchmarks.

| Model | Harness | Terminal-Bench 2.0 | SWE-Bench Verified | OpenThoughts-TB-Dev |
|---|---|---|---|---|
| Qwen3-8B | Terminus-2 | 0.0 | 0.7 | 5.7 |
| OpenThinker-Agent-v1 | Terminus-2 | 4.9 | 15.7 | 17.3 |
| Qwen3-32B | Terminus-2 | 1.9 | 5.7 | 10.2 |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | OpenHands | 10.1 | 49.2 | 24.5 |

Data

We built OpenThinker-Agent-v1 in two stages: supervised fine-tuning, followed by reinforcement learning. Each stage required its own data pipeline: SFT traces from strong teacher agents completing tasks, and RL tasks consisting of instructions, environments, and verifiers.

OpenThoughts-Agent-v1-SFT is an SFT trace dataset containing approximately 15,200 traces drawn from two data sources we curated:

  • nl2bash: Simple, synthetically generated tasks in which the agent must produce well-formed shell commands
  • InferredBugs: A set of bugs in C# and Java collected by Microsoft that we converted into tasks

OpenThoughts-Agent-v1-RL is an RL dataset containing ~720 tasks drawn from the nl2bash verified dataset.
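Concretely, an nl2bash-style RL task pairs a natural-language instruction with a verifier that checks the outcome. A minimal sketch of this idea follows; the `Task` fields and `verify` helper here are hypothetical illustrations, not the released dataset schema:

```python
import subprocess
import tempfile
from dataclasses import dataclass


@dataclass
class Task:
    """Illustrative nl2bash-style task (hypothetical fields, not the real schema)."""
    instruction: str   # natural-language request given to the agent
    verifier_cmd: str  # shell command whose exit status checks the result


def verify(task: Task, agent_cmd: str) -> bool:
    """Run the agent's proposed command, then the verifier, in a scratch directory."""
    workdir = tempfile.mkdtemp()
    subprocess.run(agent_cmd, shell=True, cwd=workdir, check=False)
    result = subprocess.run(task.verifier_cmd, shell=True, cwd=workdir, check=False)
    return result.returncode == 0


task = Task(
    instruction="Create an empty file named report.txt",
    verifier_cmd="test -f report.txt",
)
print(verify(task, "touch report.txt"))
```

Because the verifier is just a shell command with an exit status, the same reward signal works for both trace filtering during SFT data collection and reward computation during RL.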

To stabilize training, we built a three-stage filtration pipeline that prunes tasks before they ever hit the learner:

  1. Bad verifiers filter: drop tasks with flaky or excessively slow verifiers.
  2. Environment stability filter: remove tasks whose containers take too long to build or tear down.
  3. Optional difficulty filter: discard tasks that even a strong model (GPT-5 Codex) cannot solve in a single pass.
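The three stages above amount to sequential predicates over task metadata. A minimal sketch, where the field names (`verifier_flaky`, `verifier_seconds`, `build_seconds`, `solved_by_strong_model`) and thresholds are hypothetical placeholders for whatever the real pipeline records:

```python
def filter_tasks(tasks, max_verifier_s=60.0, max_build_s=300.0, require_solvable=True):
    """Apply the three filtration stages in order; field names are illustrative."""
    kept = []
    for t in tasks:
        # 1. Bad verifiers filter: drop flaky or excessively slow verifiers.
        if t["verifier_flaky"] or t["verifier_seconds"] > max_verifier_s:
            continue
        # 2. Environment stability filter: drop slow container builds/teardowns.
        if t["build_seconds"] > max_build_s:
            continue
        # 3. Optional difficulty filter: keep only tasks a strong model can solve.
        if require_solvable and not t["solved_by_strong_model"]:
            continue
        kept.append(t)
    return kept
```

Ordering the filters from cheapest to most expensive check keeps the pipeline efficient: the costly single-pass solvability probe only runs on tasks that already have stable environments and trustworthy verifiers.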

Citation

@misc{openthoughts-agent,
  author = {Team, OpenThoughts-Agent},
  month = dec,
  title = {{OpenThoughts-Agent}},
  howpublished = {https://open-thoughts.ai/agent},
  year = {2025}
}
