🔍Overview | 📦Installation | 🚀Quick Start | ⚙️Usage | 🤝Contributing | 📖Docs
SREGym is inspired by our prior work on AIOpsLab and ITBench. It is architected with AI-native usability and extensibility as first-class principles. The SREGym benchmark suite contains 86 different SRE problems: it supports all the problems from AIOpsLab and ITBench, and adds new ones such as OS-level faults, metastable failures, and concurrent failures. See our problem set for a complete list.
- MCP Inspector to test MCP tools.
- k9s to observe the cluster.
```bash
git clone --recurse-submodules https://github.com/SREGym/SREGym
cd SREGym
uv sync
uv run prek install
```

Choose either a) or b) to set up your cluster and then proceed to the next steps.
SREGym supports any Kubernetes cluster that your kubectl context points to, whether it's a cluster from a cloud provider or one you build yourself.
We provide an Ansible playbook to set up clusters on providers like CloudLab as well as on our own machines. Follow this README to set up your own cluster.
SREGym can be run on an emulated cluster using kind on your local machine. However, not all problems are supported.
Note: If you run into pod crashes or "too many open files" errors, see the kind README for required host kernel settings and troubleshooting.
```bash
# For x86 machines
kind create cluster --config kind/kind-config-x86.yaml

# For ARM machines
kind create cluster --config kind/kind-config-arm.yaml
```

To get started with the included Stratus agent:
- Set your LLM API keys in the environment (required for your chosen model provider):

```bash
# OpenAI
export OPENAI_API_KEY="sk-proj-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google
export GEMINI_API_KEY="..."

# AWS Bedrock
export AWS_PROFILE="bedrock"
export AWS_DEFAULT_REGION="us-east-2"
```

- Run the benchmark:
```bash
python main.py --agent stratus --model gpt-5
```

Use `--judge-model` to override the judge model separately (defaults to `--model`):

```bash
python main.py --agent stratus --model gpt-5 --judge-model claude-sonnet-4
```

Agents always run in isolated Docker containers, preventing access to SREGym internals like problem definitions and grading logic. The image is built automatically on first run.

Use `--force-build` to rebuild the container image after updating dependencies or agent code:

```bash
python main.py --agent codex --model gpt-5 --force-build
```

SREGym uses two model roles, both configurable via CLI:
| CLI Flag | Default | Purpose |
|---|---|---|
| `--model` | `gpt-5` | Sets both agent and judge model |
| `--judge-model` | (same as `--model`) | Override just the judge evaluator model |
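The fallback behavior of the two flags can be sketched in a few lines of Python (a hypothetical illustration using argparse, not SREGym's actual CLI code):

```python
import argparse

# Hypothetical sketch of the flag-defaulting rule described above
parser = argparse.ArgumentParser()
parser.add_argument("--model", default="gpt-5")
parser.add_argument("--judge-model", default=None)

args = parser.parse_args(["--model", "claude-sonnet-4"])

# --judge-model falls back to --model when not supplied
judge_model = args.judge_model or args.model
print(judge_model)  # claude-sonnet-4
```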
Make sure the required environment variables for your chosen provider are set before running. See the table below.
| Model ID | Provider | Model Name | Required Environment Variables |
|---|---|---|---|
| `gpt-5` | OpenAI | GPT-5 | `OPENAI_API_KEY` |
| `gemini-2.5-pro` | Google | Gemini 2.5 Pro | `GEMINI_API_KEY` |
| `claude-sonnet-4` | Anthropic | Claude Sonnet 4 | `ANTHROPIC_API_KEY` |
| `bedrock-claude-sonnet-4.5` | AWS Bedrock | Claude Sonnet 4.5 | `AWS_PROFILE`, `AWS_DEFAULT_REGION` |
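As a sanity check, the table above can be turned into a small pre-flight script (an illustrative sketch; `missing_vars` is not part of SREGym):

```python
import os

# Required environment variables per model ID, mirroring the table above
REQUIRED_VARS = {
    "gpt-5": ["OPENAI_API_KEY"],
    "gemini-2.5-pro": ["GEMINI_API_KEY"],
    "claude-sonnet-4": ["ANTHROPIC_API_KEY"],
    "bedrock-claude-sonnet-4.5": ["AWS_PROFILE", "AWS_DEFAULT_REGION"],
}

def missing_vars(model_id):
    """Return the required variables that are not exported for a model."""
    return [v for v in REQUIRED_VARS.get(model_id, []) if not os.environ.get(v)]

os.environ["OPENAI_API_KEY"] = "sk-test"  # simulate an exported key
print(missing_vars("gpt-5"))  # []
```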
Provider Examples

OpenAI:

```bash
python main.py --agent stratus --model gpt-5
```

Anthropic:

```bash
python main.py --agent stratus --model claude-sonnet-4
```

AWS Bedrock:

```bash
python main.py --agent stratus --model bedrock-claude-sonnet-4.5
```

Note: For AWS Bedrock, ensure your AWS credentials are configured via `~/.aws/credentials` and that your profile has permission to access Bedrock.
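For reference, a minimal `~/.aws/credentials` profile matching the `AWS_PROFILE="bedrock"` example above might look like this (placeholder values, not real keys; the section name must match the profile you export):

```ini
[bedrock]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
```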
This project is generously supported by a Slingshot grant from the Laude Institute.
Licensed under the MIT license.
