
Show HN: Terminal-Bench-RL: Training Long-Horizon Terminal Agents with RL https://ift.tt/CbtXSgW

After training a calculator agent via RL, I really wanted to go bigger! So I built RL infrastructure for training long-horizon terminal/coding agents that scales from 2x A100s to 32x H100s (~$1M worth of compute!).

Without any training, my 32B agent hit #19 on the Terminal-Bench leaderboard, beating Stanford's Terminus-Qwen3-235B-A22! With training... well, too expensive, but I bet the results would be good!

*What I did*:

- Created a Claude Code-inspired agent (system msg + tools)
- Built Docker-isolated GRPO training where each rollout gets its own container
- Developed a multi-agent synthetic data pipeline to generate & validate training data with Opus-4
- Implemented a hybrid reward signal of unit test verifiers & a behavioural LLM judge

*Key results*:

- My untrained Qwen3-32B agent achieved 13.75% on Terminal-Bench (#19, beating Stanford's Qwen3-235B MoE)
- I verified that training runs stably on 32x H100s distributed across 4 bare metal nodes
- I created a mini-eval framework for LLM-judge performance. Sonnet-4 won.
- ~£30-50k needed for a full training run of 1000 epochs (I could only afford testing)

*Technical details*:

- The synthetic dataset ranges from easy to extremely hard tasks. An example hard task's prompt: "I found this mystery program at `/app/program` and I'm completely stumped. It's a stripped binary, so I have no idea what it does or how to run it properly. The program seems to expect some specific input and then produces an output, but I can't figure out what kind of input it needs. Could you help me figure out what this program requires?"
- Simple config presets allow training to run on multiple hardware setups with minimal effort.
- GRPO used with 16 rollouts per task, up to 32k tokens per rollout.
- Agent uses XML/YAML format to structure tool calls

*More details*:

My GitHub repos open source it all (agent, data, code) and have way more technical details if you are interested!

- Terminal Agent RL repo
- Multi-agent synthetic data pipeline repo

I thought I would share this because I believe long-horizon RL is going to change everybody's lives, so I feel it is important (and super fun!) for us all to share knowledge in this area and enjoy exploring what is possible. Thanks for reading!

Dan

(Built using the rLLM RL framework, which was brilliant to work with, and evaluated on and inspired by the great Terminal Bench benchmark)

https://ift.tt/GsBbCqz

July 29, 2025 at 04:12AM
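For readers curious about the mechanics: GRPO's core trick is scoring each rollout against its own group of rollouts for the same task, rather than against a learned value critic. A minimal sketch of the group-relative advantage computation, assuming the 16-rollouts-per-task setup from the post (the reward values are purely illustrative):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: normalise each rollout's reward
    by the mean and std of its group. No value network needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All rollouts scored identically: no learning signal.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# 16 rollouts of one task, each scored by the reward function.
rewards = [0.0, 1.0, 0.5, 1.0, 0.0, 0.0, 0.5, 1.0,
           0.0, 0.5, 0.5, 0.0, 1.0, 1.0, 0.0, 0.5]
advs = grpo_advantages(rewards)
# Advantages sum to ~0 across the group by construction.
```

Each token of a rollout is then weighted by its rollout's advantage in the policy-gradient update; rollouts that beat their group average get reinforced, the rest get suppressed.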
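The hybrid reward mentioned above could be as simple as a weighted blend of the verifiable unit-test signal and the behavioural judge's score. The function name and weights here are my assumptions, not the repo's actual values:

```python
def hybrid_reward(tests_passed, tests_total, judge_score,
                  w_tests=0.7, w_judge=0.3):
    """Blend a hard unit-test verifier signal with a soft LLM-judge
    score (assumed to be in [0, 1]). Weights are illustrative."""
    test_frac = tests_passed / tests_total if tests_total else 0.0
    return w_tests * test_frac + w_judge * judge_score
```

Weighting the verifier higher keeps the reward grounded in checkable outcomes, while the judge term discourages degenerate behaviour that still happens to pass the tests.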
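The XML/YAML tool-call format isn't specified beyond the bullet above, but one plausible shape is an XML tag wrapping flat YAML fields. A stdlib-only sketch; the `<tool_call>` tag name and field layout are my guesses, and a real implementation would use a full YAML parser such as PyYAML:

```python
import re

def parse_tool_call(text):
    """Extract the first <tool_call> block from a model reply and
    read its body as flat 'key: value' YAML lines. Returns a dict,
    or None if the reply contains no tool call."""
    m = re.search(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    if m is None:
        return None
    fields = {}
    for line in m.group(1).strip().splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

reply = """I'll list the directory first.
<tool_call>
tool: bash
command: ls -la /app
</tool_call>"""
call = parse_tool_call(reply)
# call == {"tool": "bash", "command": "ls -la /app"}
```

The appeal of XML-wrapped YAML over raw JSON is that both layers are forgiving of the minor formatting drift smaller models produce mid-rollout.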