# GameDevBench

A benchmark suite for evaluating LLM agents on game development tasks.
Paper: [GameDevBench: Evaluating Agentic Capabilities Through Game Development](https://arxiv.org/abs/2602.11103)
GameDevBench contains 132 tasks that evaluate LLM agents' ability to solve game development problems in the Godot game engine.
## Prerequisites

- **Godot 4.x** - Download and install from [godotengine.org](https://godotengine.org)
  - Ensure `godot` is available in your `PATH`, or set the `GODOT_EXEC_PATH` environment variable
- **Python 3.10+** - Required for all agents
  - Python 3.12+ - Required for the OpenHands agent
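A quick way to check the Godot setup is a snippet like the one below. The fallback path is a placeholder, not something this repo mandates; adjust it to wherever your Godot binary lives.

```shell
# Resolve the Godot binary: prefer one on PATH, otherwise fall back to
# GODOT_EXEC_PATH (or a placeholder path you should edit for your machine).
GODOT_BIN="$(command -v godot || true)"
if [ -z "$GODOT_BIN" ]; then
  GODOT_BIN="${GODOT_EXEC_PATH:-/usr/local/bin/godot}"
fi
echo "Godot binary: $GODOT_BIN"
```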
## Installation

Install the agent(s) you want to use:

- **Claude Code** - Anthropic's Claude Code CLI
- **Codex** - OpenAI Codex
- **Gemini CLI** - Google Gemini CLI
- **OpenHands** - OpenHands (requires Python 3.12+)
## Setup

Before running the benchmark, unzip the tasks:

```bash
bash unzip_tasks.sh
```

This will unzip all individual task archives from `tasks/` and `tasks_gt/` in place.
Note: Tasks are distributed as individual zip files to prevent accidental data leakage.
## Configuration

You can use the built-in plans for `claude-code`, `codex`, and `gemini-cli`, or provide API keys directly. For OpenHands you must provide your own API keys. See `.env.example` for a complete list of optional environment variables.
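If you are providing API keys directly, one option is to export them in your shell. The variable names below are the conventional ones for each agent's CLI and are an assumption here, not confirmed by this repo; treat `.env.example` as the authoritative list.

```shell
# Illustrative key setup; variable names are assumptions -- see .env.example.
export ANTHROPIC_API_KEY="sk-ant-..."   # Claude Code
export OPENAI_API_KEY="sk-..."          # Codex
export GEMINI_API_KEY="..."             # Gemini CLI
```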
## Usage

```bash
uv run python gamedevbench/src/benchmark_runner.py \
  --agent AGENT \
  --model MODEL \
  run --task-list tasks.yaml
```

Available agents:

- `claude-code` - Anthropic's Claude Code CLI
- `codex` - OpenAI Codex
- `gemini-cli` - Google Gemini CLI
- `openhands` - OpenHands (requires Python 3.12+)
### Options

- `--agent AGENT` - Agent to use (required)
- `--model MODEL` - Model name (e.g., `claude-sonnet-4.5-20250929`)
- `--enable-mcp` - Enable the MCP (Model Context Protocol) server for supported agents
  - Provides screenshot capabilities to the agent
  - Note: the MCP server requires macOS (see limitations below)
- `--use-runtime-video` - Enable runtime video mode
  - Appends Godot runtime instructions to prompts
  - Helps agents understand how to run and test their changes
- `--skip-display` - Skip tasks that require a display
- `run --task-list FILE` - Run tasks from a YAML file (e.g., `tasks.yaml`)
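Putting the options together, a typical invocation might look like the following (the agent and model names are taken from the lists above):

```shell
# Run the task list with Claude Code and a specific model.
uv run python gamedevbench/src/benchmark_runner.py \
  --agent claude-code \
  --model claude-sonnet-4.5-20250929 \
  run --task-list tasks.yaml
```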
## Limitations

**macOS-only features:**

- MCP server screenshot functionality (`--enable-mcp`) currently only works on macOS
  - Uses AppleScript for display capture
  - Requires setting the `GODOT_SCREENSHOT_DISPLAY` environment variable to the correct display number
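On macOS, export the display number before an `--enable-mcp` run. The value `1` below is an assumption (typically the main display); use whichever display the Godot window actually appears on.

```shell
# Select the display that AppleScript should capture for screenshots.
# 1 is usually the main display; change it if Godot runs on another screen.
export GODOT_SCREENSHOT_DISPLAY=1
```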
## Results

Benchmark results are saved to the `results/` directory with the following information:
- Task success/failure status
- Token usage and costs
- Execution time
- Validation results
## Citation

```bibtex
@misc{chi2026gamedevbenchevaluatingagenticcapabilities,
  title={GameDevBench: Evaluating Agentic Capabilities Through Game Development},
  author={Wayne Chi and Yixiong Fang and Arnav Yayavaram and Siddharth Yayavaram and Seth Karten and Qiuhong Anna Wei and Runkun Chen and Alexander Wang and Valerie Chen and Ameet Talwalkar and Chris Donahue},
  year={2026},
  eprint={2602.11103},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2602.11103},
}
```