
SREGym: A Benchmarking Platform for SRE Agents

🔍 Overview | 📦 Installation | 🚀 Quick Start | ⚙️ Usage | 🤝 Contributing | 📖 Docs | Slack

🔍 Overview

SREGym is an AI-native platform for designing, developing, and evaluating AI agents for Site Reliability Engineering (SRE). The core idea is to provide live system environments in which SRE agents solve real-world SRE problems. SREGym includes a comprehensive SRE benchmark suite with a wide variety of problems for evaluating SRE agents, as well as for training next-generation AI agents.

SREGym Overview

SREGym is inspired by our prior work on AIOpsLab and ITBench. It is architected with AI-native usability and extensibility as first-class principles. The SREGym benchmark suite contains 86 distinct SRE problems: it supports all the problems from AIOpsLab and ITBench, and adds new ones such as OS-level faults, metastable failures, and concurrent failures. See our problem set for a complete list of problems.

📦 Installation

Requirements

Recommendations

git clone --recurse-submodules https://github.com/SREGym/SREGym
cd SREGym
uv sync
uv run pre-commit install

🚀 Quick Start

Set up your cluster

Choose either a) or b) to set up your cluster and then proceed to the next steps.

a) Kubernetes Cluster (Recommended)

SREGym supports any Kubernetes cluster that your current kubectl context points to, whether it is a cluster from a cloud provider or one you build yourself.

We provide an Ansible playbook to set up clusters on providers like CloudLab and on our own machines. Follow this README to set up your own cluster.

b) Emulated cluster

SREGym can also run on an emulated cluster using kind on your local machine. However, not all problems are supported in this mode.

# For x86 machines
kind create cluster --config kind/kind-config-x86.yaml

# For ARM machines
kind create cluster --config kind/kind-config-arm.yaml

⚙️ Usage

Running an Agent

Quick Start

To get started with the included Stratus agent:

  1. Create your .env file:

mv .env.example .env

  2. Open the .env file and configure your model and API key.

  3. Run the benchmark:

python main.py
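For step 2, your .env will look something like the sketch below. The key names here are hypothetical placeholders, not the project's actual variable names; use the names that appear in .env.example:

```
# Hypothetical key names -- copy the real ones from .env.example
MODEL=gpt-4o
API_KEY=your-api-key-here
```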

Monitoring with Dashboard

SREGym provides a dashboard for monitoring the status of your evaluation. The dashboard starts automatically when you launch the benchmark with python main.py and is available at http://localhost:11451 in your web browser.
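If you want to confirm the dashboard is up from a script rather than a browser, a simple reachability probe works. This is a generic sketch, not part of SREGym itself; only the port comes from this README, and the helper name is ours:

```python
import urllib.request
from urllib.error import URLError

def dashboard_up(url: str = "http://localhost:11451", timeout: float = 2.0) -> bool:
    """Return True if something responds with a non-error status at `url`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # 2xx/3xx means the dashboard answered; 4xx/5xx raise HTTPError below.
            return 200 <= resp.status < 400
    except (URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as "not up".
        return False

print(dashboard_up())
```

Polling this in a loop after launching `python main.py` is a convenient way to wait for the dashboard before opening it.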

Acknowledgements

This project is generously supported by a Slingshot grant from the Laude Institute.

License

Licensed under the MIT license.
