Sync Claude Code configs from GitHub — for teams, communities, or across your personal projects. Run evals to verify your guidelines actually work.
- **For teams:** Your engineering standards live in one GitHub repo. Everyone syncs from it. When standards change, one PR updates the whole team. Run evals to verify Claude actually follows your guidelines.
- **For individuals:** Browse community configs to jumpstart your setup, or keep your personal config in a repo and sync it across all your machines.
- **For everyone:** Layer your personal preferences on top of any base config. Your style, their standards.
```bash
# Install
brew tap HartBrook/tap
brew install staghorn

# Set up
stag init
```
Choose how you want to get started:
- Browse public configs — Install community-shared configs from GitHub
- Connect to a team repo — Sync your team's private standards
- Start fresh — Just use the built-in starter commands
Then run `stag sync` periodically to stay up to date.
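To keep configs fresh without thinking about it, you can schedule the sync — a minimal sketch using cron (the install path is an assumption; adjust it to where `stag` lives on your machine):

```bash
# Hypothetical crontab entry: pull and apply the latest config every day at 9am.
# Cron uses a minimal PATH, so reference the binary by absolute path.
0 9 * * * /opt/homebrew/bin/stag sync >> "$HOME/.cache/staghorn/sync.log" 2>&1
```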
Test that your CLAUDE.md config actually produces the behavior you want. Evals live in your source repo alongside your config, so they stay in sync with your guidelines.
Prerequisites:
```bash
# Install Promptfoo (evals run on Promptfoo under the hood)
npm install -g promptfoo

# Set your Anthropic API key
export ANTHROPIC_API_KEY=sk-ant-...
```
Note: Running evals makes real API calls to Claude and will consume credits. Each test case is one API call.
```bash
# Run evals from your source repo
stag eval

# Run specific evals
stag eval security-secrets
stag eval --tag security

# Run a specific test within an eval
stag eval lang-python --test uses-type-hints

# Run tests matching a prefix pattern
stag eval --test "uses-*"

# Preview what would run (no API calls)
stag eval --dry-run
```
Evals use Promptfoo under the hood. Perfect for CI/CD:
```bash
# JSON output for CI
stag eval --output json

# GitHub Actions annotations
stag eval --output github

# Test specific config layers
stag eval --layer team
```
Staghorn includes 25 starter evals you can install and adapt:
```bash
stag eval init            # Install to personal evals
stag eval init --project  # Install to project evals
```

| Category | Evals |
|---|---|
| Security | Secrets detection, injection prevention, auth patterns, OWASP Top 10, input validation |
| Code Quality | Clarity, simplicity, naming, error handling |
| Code Review | Bug detection, test coverage, performance, maintainability |
| Documentation | API docs, code comments |
| Git | Commit messages, sensitive file handling |
| Language | Python, Go, TypeScript, Rust best practices |
| Baseline | Helpfulness, focus, honesty, minimal responses |
See Creating Evals for writing custom evals, or the Evals Guide for in-depth debugging and best practices.
```bash
# Search for public configs
stag search

# Filter by language or topic
stag search --lang python
stag search --tag security

# Install directly if you know the repo
stag init --from acme/claude-standards
```
Public configs are GitHub repos with the `staghorn-config` topic. Find configs tailored for Python, Go, React, security-focused development, and more.
```bash
stag init
# Choose option 2: "Connect to a private repository"
# Enter your team's repo URL
```
Your team admin sets up a standards repo (see For Config Publishers below), and everyone syncs from it. Authentication via `gh auth login` or `STAGHORN_GITHUB_TOKEN`.
Staghorn pulls configs from GitHub, merges them with your personal preferences, and writes the result to where Claude Code expects it:
```
Team/community config (GitHub) ─┐
                                ├─► ~/.claude/CLAUDE.md
Your personal additions ───────┘

Project config (.staghorn/) ──────► ./CLAUDE.md
```
The layering means you get shared standards plus your personal style. You never edit the output files directly — Staghorn manages them.
Advanced: You can pull different parts of your config from different sources — team standards for your base config, community best practices for specific languages. See Multi-Source Configuration.
| Command | Description |
|---|---|
| `stag init` | Set up staghorn (browse configs or connect repo) |
| `stag sync` | Fetch latest config from GitHub and apply |
| `stag search` | Search for community configs |
| `stag edit` | Edit personal config (auto-applies on save) |
| `stag edit -l <lang>` | Edit personal language config (e.g., `-l python`) |
| `stag info` | Show current config state |
| `stag optimize` | Compress config to reduce token usage |
| `stag languages` | Show detected and configured languages |
| `stag commands` | List available commands |
| `stag run <command>` | Run a command (outputs prompt to stdout) |
| `stag eval` | Run behavioral evals against your config |
| `stag eval init` | Install starter evals |
| `stag eval list` | List available evals |
| `stag eval validate` | Validate eval definitions without running |
| `stag eval create` | Create a new eval from a template |
| `stag project` | Manage project-level config |
| `stag team` | Bootstrap or validate a team standards repo |
| `stag version` | Print version number |
```bash
# Update config (do this periodically)
stag sync

# Check current state
stag info

# Add personal preferences (auto-applies)
stag edit
```
Large configs consume more tokens in Claude's context window. If `stag info` shows your config exceeds 3,000 tokens, consider optimizing:
```bash
# Analyze merged config (informational, no changes)
stag optimize

# Show before/after diff
stag optimize --diff

# Optimize and save personal config
stag optimize --layer personal --apply

# Optimize and save team config
stag optimize --layer team --apply

# Optimize merged config and apply back to all source layers
stag optimize --apply

# Fast mode: deterministic cleanup only (no API calls)
stag optimize --deterministic --layer personal --apply
```
The optimizer:
- **Pre-processes** — Removes duplicate rules, collapses whitespace, strips verbose phrases
- **Uses Claude** — Intelligently compresses content while preserving meaning (unless `--deterministic`)
- **Validates** — Ensures critical content is preserved (see Anchor Validation below)
Note: `CLAUDE.md` files are managed by staghorn and regenerated on each sync. To persist optimizations, use `--apply` to save changes. When using `--layer merged --apply` (or just `--apply`), the optimized content is split by provenance markers and written back to each source layer (team, personal).
After optimization, staghorn validates that "critical content" wasn't removed. Anchors are categorized by strictness:
| Anchor Type | Examples | Strictness | Behavior |
|---|---|---|---|
| File paths | `~/.config/app.yaml`, `./src/main.go` | Strict | Missing = error |
| Commands | `go test ./...`, `npm run build` | Strict | Missing = error |
| Function/class names | `ProcessPayment`, `UserService` | Strict | Missing = error |
| Tool names | `pytest`, `ruff`, `golangci-lint` | Soft | Missing = warning |
**Strict anchors** (file paths, commands, function names) are project-specific and must be preserved exactly. If any are missing, optimization fails unless `--force` is used.

**Soft anchors** (tool names) may be consolidated or rephrased by the LLM. Missing tool names generate informational messages but don't fail validation. This allows the optimizer to combine rules like "use black, isort, ruff" without triggering errors, as in the sketch below.
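For instance, an illustrative before/after (not actual optimizer output) where consolidation keeps every soft anchor intact:

```markdown
<!-- Before -->
- Format code with black
- Sort imports with isort
- Lint with ruff

<!-- After -->
- Format with black, sort imports with isort, and lint with ruff
```

All three tool names survive the rewrite, so soft-anchor validation passes without warnings.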
What's NOT extracted (to avoid false positives):
Generic variable names commonly used in code examples are filtered out because they're illustrative, not project-specific. This includes: `config`, `data`, `result`, `user`, `value`, `input`, `output`, `err`, `ctx`, `req`, `res`, `opts`, `args`, `params`, and common single-letter variables (`i`, `j`, `x`, `y`, etc.).

This means that if a code example uses `const config = {...}` and the optimizer restructures it to `const settings = {...}`, validation won't fail — these are interchangeable example names, not critical identifiers.

If validation fails unexpectedly, use `--force` to apply anyway or `--deterministic` for safer optimization.
Your personal additions layer on top of source configs:
```bash
# Open your personal config in $EDITOR (auto-applies on save)
stag edit
```
This opens `~/.config/staghorn/personal.md`. Add whatever you like:
```markdown
## My Preferences

- I prefer concise responses unless I ask for detail
- Always use TypeScript strict mode
- Explain your reasoning before showing code
```
Set preferences for specific languages that only apply when detected in a project:
```bash
stag edit --language python
stag edit -l go
```
This creates/edits `~/.config/staghorn/languages/<lang>.md`.
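As a sketch, a personal Python config might look like this (the rules are hypothetical examples, not staghorn defaults):

```markdown
## Python

- Use type hints on all public functions
- Prefer pytest fixtures over setUp/tearDown
- Suggest running ruff before committing
```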
Optionally manage project-level `./CLAUDE.md` files:

```bash
stag project init                              # Initialize
stag project init --template=backend-service   # From template
stag project edit                              # Edit
```
The source file is `.staghorn/project.md` — both it and `./CLAUDE.md` should be committed.
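A minimal `.staghorn/project.md` could be as simple as this (the project name and rules are hypothetical):

```markdown
# payments-api

- Run `make test` before proposing a commit
- All handlers validate input with the shared middleware
```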
Commands are reusable prompts for common workflows. Staghorn includes 10 starter commands:
| Command | Description |
|---|---|
| `code-review` | Thorough code review with checklist |
| `security-audit` | Scan for vulnerabilities |
| `pr-prep` | Prepare PR description |
| `explain` | Explain code in plain English |
| `refactor` | Suggest refactoring improvements |
| `test-gen` | Generate unit tests |
| `debug` | Help diagnose a bug |
| `doc-gen` | Generate documentation |
| `migrate` | Help migrate code |
| `api-design` | Design API interfaces |
```bash
# List available commands
stag commands

# Run a command
stag run security-audit

# Run with arguments
stag run code-review --focus=security
```
Install commands as Claude Code slash commands:

```bash
stag commands init --claude   # Install to ~/.claude/commands/
```
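Once installed, each command should be available as a slash command in Claude Code — e.g. `/security-audit` — assuming Claude Code's usual convention of naming slash commands after the files in `~/.claude/commands/`.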
Everything below is for people creating configs to share — whether for a team or the community.

Use `team init` to bootstrap a new standards repository:
```bash
mkdir my-team-standards && cd my-team-standards
git init
stag team init
```
This creates:

- A starter `CLAUDE.md` with common guidelines
- Optional commands, language configs, and project templates
- A README explaining the repo structure
- `.staghorn/source.yaml` — marks this as a source repo (see below)
Push to GitHub and share the URL with your team.
When you're inside a team/community standards repository (marked by `.staghorn/source.yaml`), staghorn commands operate directly on local files instead of cached copies:
| Command | Normal Behavior | In Source Repo |
|---|---|---|
| `stag edit team` | Error (read-only) | Opens `./CLAUDE.md` in editor |
| `stag optimize --layer team` | Reads/writes cache | Reads/writes `./CLAUDE.md` |
| `stag info --layer team` | Reads from cache | Reads from `./CLAUDE.md` |
| `stag eval --layer team` | Tests cached content | Tests `./CLAUDE.md` |
This enables a natural workflow for maintaining team standards:
```bash
cd my-team-standards

# Edit the team config directly
stag edit team

# Run evals against your changes
stag eval --layer team

# Optimize the config
stag optimize --layer team --apply

# Commit and push
git add . && git commit -m "Update standards"
git push
```
The source repo marker is created automatically by `team init`. For existing repos, run `team init` again or create `.staghorn/source.yaml` manually:
```yaml
# .staghorn/source.yaml
source_repo: true
```
For your config to appear in `stag search`, add GitHub topics to your repository:
Required:

- `staghorn-config` — Makes your repo discoverable via `stag search`

Language topics (for `--lang` filtering):

- Add topics like `python`, `go`, `typescript`, `rust`, `java`, `ruby`
- Users can search with aliases: `golang` → `go`, `py` → `python`, `ts` → `typescript`

Custom tags (for `--tag` filtering):

- Add any topics you want: `security`, `web`, `ai`, `backend`, etc.

Example: a Python security-focused config should have the topics `staghorn-config`, `python`, `security`. Then users can find it with:
```bash
stag search --lang python --tag security
stag search --lang py   # aliases work too
```

```
your-org/claude-standards/
├── .staghorn/
│ └── source.yaml # Source repo marker (created by team init)
├── CLAUDE.md # Guidelines (required)
├── commands/ # Reusable prompts (optional)
│ ├── security-audit.md
│ └── code-review.md
├── languages/ # Language-specific configs (optional)
│ ├── python.md
│ └── go.md
├── evals/ # Behavioral tests (optional)
│ ├── security-secrets.yaml
│ └── code-quality.yaml
└── templates/ # Project templates (optional)
└── backend-service.md
```

See `example/team-repo/` for a complete example.
```bash
stag team validate
```
Checks that:

- `.staghorn/source.yaml` exists (warns if missing)
- `CLAUDE.md` exists and is non-empty
- Commands in `commands/` have valid YAML frontmatter
- Language configs in `languages/` are valid markdown
- Templates in `templates/` are valid markdown (if present)
- Evals in `evals/` are valid YAML (if present)
Add comments that appear in source but are stripped from output:
```markdown
## Code Review Guidelines
<!-- [staghorn] Tip: Customize this section in your personal.md -->
- All PRs require one approval
```
When installing from a new source, Staghorn shows a warning for untrusted repos. You can pre-trust sources in your config:
```yaml
# ~/.config/staghorn/config.yaml
trusted:
  - acme-corp                # Trust all repos from this org
  - community/python-config  # Trust a specific repo
```
Private repos auto-trust their org during `stag init`.
Pull different parts of your config from different repositories:
```yaml
# ~/.config/staghorn/config.yaml
source:
  default: my-company/standards            # Base standards from your team
  languages:
    python: community/python-standards     # Community Python config
    go: my-company/go-standards            # Team-specific Go config
  commands:
    security-audit: security-team/audits   # Commands from another team
```
This is useful when you want team standards for some things, but community best practices for specific languages.
Language configs are markdown files in `languages/` directories, layered just like the main config:

- Team/community — `languages/` in the source repo
- Personal — `~/.config/staghorn/languages/`
- Project — `.staghorn/languages/`

How they're applied:

- Global (`~/.claude/CLAUDE.md`): includes all available language configs
- Project (`./CLAUDE.md`): auto-detects languages from marker files (e.g., `go.mod`, `pyproject.toml`)
```yaml
# ~/.config/staghorn/config.yaml

# Only include specific languages globally
languages:
  enabled:
    - python
    - go

# Or exclude specific languages
languages:
  disabled:
    - javascript
```

| Language | Marker Files |
|---|---|
| Python | `pyproject.toml`, `setup.py`, `requirements.txt`, `Pipfile` |
| Go | `go.mod` |
| TypeScript | `tsconfig.json` |
| JavaScript | `package.json` |
| Rust | `Cargo.toml` |
| Java | `pom.xml`, `build.gradle` |
| Ruby | `Gemfile` |
| C# | `*.csproj`, `*.sln` |
| Swift | `Package.swift` |
| Kotlin | `build.gradle.kts` |
When both TypeScript and JavaScript are detected, TypeScript takes precedence.
A command is a markdown file with YAML frontmatter:
```markdown
---
name: security-audit
description: Scan for common security vulnerabilities
tags: [security, review]
args:
  - name: path
    description: Directory to audit
    default: "."
  - name: severity
    description: Minimum severity
    default: medium
    options: [low, medium, high, critical]
---

# Security Audit

Review the code at {{path}} for security vulnerabilities.
Report issues at {{severity}} severity or higher.
```
Commands can come from three sources (highest precedence first):
- Project — `.staghorn/commands/`
- Personal — `~/.config/staghorn/commands/`
- Team/community — `commands/` in the source repo
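As a sketch, adding and running a personal command might look like this (the `changelog` command and its contents are hypothetical):

```bash
mkdir -p ~/.config/staghorn/commands

# A minimal command file: YAML frontmatter plus a prompt body
cat > ~/.config/staghorn/commands/changelog.md <<'EOF'
---
name: changelog
description: Draft a changelog entry from recent commits
tags: [git]
---
# Changelog

Summarize the recent changes in this repository as a changelog entry.
EOF

# Confirm it's picked up, then run it
stag commands
stag run changelog
```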
Evals are YAML files that define behavioral tests for your Claude config. Each eval contains test cases that verify Claude responds appropriately given your CLAUDE.md guidelines.
````yaml
name: security-secrets
description: Verify config detects and warns about hardcoded secrets
tags: [security, critical]

tests:
  - name: warns-about-api-keys
    description: Should warn when code contains hardcoded API keys
    prompt: |
      Review this code:
      ```python
      API_KEY = "sk-1234567890abcdef"
      client = OpenAI(api_key=API_KEY)
      ```
    assert:
      - type: llm-rubric
        value: Response must warn about hardcoded API key and suggest using environment variables

  - name: suggests-env-variables
    prompt: |
      How should I store my Stripe secret key in my Python app?
    assert:
      - type: contains-any
        value:
          ["environment variable", "env var", "os.environ", "os.getenv", ".env"]
````
Staghorn uses Promptfoo assertions:
| Type | Description |
|---|---|
| `llm-rubric` | AI-graded evaluation against a rubric (most flexible) |
| `contains` | Response contains exact string |
| `contains-any` | Response contains any of the listed strings |
| `contains-all` | Response contains all listed strings |
| `not-contains` | Response does not contain string |
| `regex` | Response matches regex pattern |
| `javascript` | Custom JavaScript assertion function |
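As a sketch, several assertion types can be combined in one test — assuming standard Promptfoo semantics, where `javascript` values are expressions evaluated against `output` (the test itself is hypothetical):

```yaml
tests:
  - name: discourages-eval
    prompt: |
      Is it OK to parse untrusted user input with eval() in Python?
    assert:
      - type: regex
        value: "(ast\\.literal_eval|json\\.loads)"
      - type: not-contains
        value: "eval() is perfectly safe"
      - type: javascript
        value: output.length < 2000
```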
Evals can come from four sources (all are loaded):
| Source | Location | Use case |
|---|---|---|
| Team | `evals/` in source repo | Shared team standards |
| Personal | `~/.config/staghorn/evals/` | Your custom tests |
| Project | `.staghorn/evals/` | Project-specific tests |
| Starter | Built-in | Common baseline tests |
Install starter evals to customize them:
```bash
stag eval init            # To personal directory
stag eval init --project  # To project directory
```
Test different layers of your config independently:
```bash
stag eval --layer team      # Test only team config
stag eval --layer personal  # Test only personal additions
stag eval --layer project   # Test only project config
stag eval --layer merged    # Test full merged config (default)
```
Evals can specify which config layers to test against:
```yaml
name: team-security-standards
description: Verify team security guidelines are effective

context:
  layers: [team]           # Only test team config
  languages: [python, go]  # Include these language configs

provider:
  model: ${STAGHORN_EVAL_MODEL:-claude-sonnet-4-20250514}

tests:
  # ...
```
Run evals in your CI pipeline:
```yaml
# GitHub Actions example
- name: Run staghorn evals
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: |
    stag eval --output github
```
Output formats:
- `table` — Human-readable table (default)
- `json` — Machine-readable JSON
- `github` — GitHub Actions annotations
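A fuller workflow sketch, assuming you install staghorn with `go install` and Promptfoo with npm (the workflow file itself is hypothetical; adapt it to your pipeline):

```yaml
# .github/workflows/evals.yml
name: config-evals
on: [pull_request]

jobs:
  evals:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - name: Install staghorn and Promptfoo
        run: |
          go install github.com/HartBrook/staghorn/cmd/staghorn@latest
          echo "$HOME/go/bin" >> "$GITHUB_PATH"
          npm install -g promptfoo
      - name: Run staghorn evals
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: staghorn eval --output github
```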
| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Required for running evals and `stag optimize` |
| `STAGHORN_EVAL_MODEL` | Model to use for evals (default: `claude-sonnet-4-20250514`) |
`~/.config/staghorn/config.yaml`:

```yaml
version: 1

# Simple: single source
source: "acme/standards"

# Or multi-source (see Multi-Source Configuration above)
# source:
#   default: acme/standards
#   languages:
#     python: community/python-standards

# Trusted orgs/repos (skip confirmation prompts)
trusted:
  - acme-corp
  - community/python-standards

cache:
  ttl: "24h"  # How long to cache before re-fetching

languages:
  auto_detect: true  # Detect from project marker files
  enabled: []        # Explicit list (overrides auto-detect)
  disabled: []       # Languages to exclude
```

| File | Purpose |
|---|---|
| `~/.config/staghorn/config.yaml` | Staghorn settings |
| `~/.config/staghorn/personal.md` | Your personal additions |
| `~/.config/staghorn/commands/` | Personal commands |
| `~/.config/staghorn/languages/` | Personal language configs |
| `~/.config/staghorn/evals/` | Personal evals |
| `~/.config/staghorn/optimized/` | Cached optimization results |
| `~/.cache/staghorn/` | Cached team/community configs |
| `~/.claude/CLAUDE.md` | Output — merged global config |
| `.staghorn/project.md` | Project config source (you edit this) |
| `.staghorn/source.yaml` | Source repo marker (team repos only) |
| `.staghorn/commands/` | Project-specific commands |
| `.staghorn/languages/` | Project-specific language configs |
| `.staghorn/evals/` | Project-specific evals |
| `./CLAUDE.md` | Output — merged project config |
When `stag sync` generates `CLAUDE.md`, it embeds HTML comments that track where each section originated. These comments are invisible to Claude but enable debugging and future features:
```markdown
<!-- Generated by staghorn | Source: acme/standards | 2025-01-19 -->

<!-- staghorn:source:team -->
## Code Style
Team rules here...

<!-- staghorn:source:personal -->
### Personal Additions
Your additions here...

## Python
<!-- staghorn:source:team:python -->
### Python Guidelines
Python team rules...

<!-- staghorn:source:personal:python -->
### Personal Additions
Python personal prefs...
```
Comment format:
- `<!-- staghorn:source:LAYER -->` — Marks main content from a layer (`team`, `personal`, `project`)
- `<!-- staghorn:source:LAYER:LANGUAGE -->` — Marks language-specific content (e.g., `team:python`, `personal:go`)
Each marker indicates where the following content originated. The next marker implicitly ends the previous section.
This enables:
- Debugging — See exactly where each rule came from
- Transparency — Understand the merge process
- Layer extraction — Parse and extract content by layer or full source
Use `stag info --sources` to generate a config with these annotations visible.
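Since layers are delimited by these markers, you can pull one layer's content out of a generated file with ordinary text tools — a minimal sketch, not a staghorn feature:

```bash
# Print everything between the team marker and the next staghorn marker.
awk '/<!-- staghorn:source:team -->/ {keep=1; next}
     /<!-- staghorn:source:/         {keep=0}
     keep' ~/.claude/CLAUDE.md
```

(`stag info --layer team` does this for you; the point here is just how the markers work.)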
```bash
# Sync options
stag sync --fetch-only # Fetch without applying
stag sync --apply-only # Apply cached config without fetching
stag sync --force # Re-fetch even if cache is fresh
stag sync --offline # Use cached config only (no network)
stag sync --config-only # Sync config only, skip commands/languages
stag sync --commands-only # Sync commands only
stag sync --languages-only # Sync language configs only
# Search options
stag search --lang go # Filter by language
stag search --tag security # Filter by topic
stag search --limit 10 # Limit results
# Init options
stag init --from owner/repo # Install directly from a repo
# Edit options
stag edit # Edit personal config
stag edit project # Edit project config
stag edit team # Edit team config (only in source repos)
stag edit --no-apply # Edit without auto-applying
# Info options
stag info --content # Show full merged config
stag info --layer team # Show only team config (also: personal, project)
stag info --sources # Annotate output with source information
# Optimize options
stag optimize # Analyze merged config (informational)
stag optimize --diff # Show before/after diff
stag optimize --layer personal --apply # Optimize and save personal config
stag optimize --layer team --apply # Optimize and save team config
stag optimize --apply # Optimize merged and apply to all source layers
stag optimize --deterministic # No LLM, just cleanup (fast, no API key needed)
stag optimize --target 2000 # Target specific token count
stag optimize --force # Re-optimize even if cache is valid
stag optimize --no-cache # Skip cache read/write
stag optimize -o output.md # Write to custom file
# Command options
stag commands --tag security # Filter commands by tag
stag commands --source team # Filter by source (team, personal, project)
stag run <command> --dry-run # Preview command without rendering
# Eval options
stag eval # Run all evals
stag eval <name> # Run specific eval
stag eval --tag security # Filter by tag
stag eval --test <name> # Run specific test (or prefix pattern like "uses-*")
stag eval --layer team # Test specific config layer
stag eval --output json # Output format (table, json, github)
stag eval --verbose # Show detailed output
stag eval --debug # Show full responses and preserve temp files
stag eval --dry-run # Show what would be tested
stag eval list # List available evals
stag eval list --source team # Filter by source
stag eval info <name> # Show eval details
stag eval init # Install starter evals
stag eval init --project # Install to project directory
stag eval validate # Validate all eval definitions
stag eval validate <name> # Validate specific eval
stag eval create # Create new eval (interactive)
stag eval create --template security # Create from template
stag eval create --from <eval> # Copy from existing eval
stag eval create --project # Save to .staghorn/evals/
stag eval create --team                # Save to ./evals/ for team sharing
```

Install with Homebrew:

```bash
brew tap HartBrook/tap
brew install staghorn
```

Or with Go:

```bash
go install github.com/HartBrook/staghorn/cmd/staghorn@latest
```
The `stag` alias is also available (symlink to `staghorn`).
Public repos — No authentication needed. Staghorn fetches community configs without any setup.
Private repos — You'll need GitHub access:
```bash
# Option 1: GitHub CLI (recommended)
brew install gh
gh auth login

# Option 2: Personal access token
export STAGHORN_GITHUB_TOKEN=ghp_xxxxxxxxxxxx
```
If you already have a `~/.claude/CLAUDE.md`, the first `stag sync` will detect it and offer:
- Migrate — Move content to `~/.config/staghorn/personal.md`
- Backup — Save a copy before overwriting
- Abort — Cancel and leave unchanged
"No editor found"
export EDITOR="code --wait" # VS Code
export EDITOR="vim" # Vim"Could not authenticate with GitHub"
Run `gh auth login` or set `STAGHORN_GITHUB_TOKEN`.

"Cache is stale" warnings

Run `stag sync --force` to re-fetch.

Config not updating after edit

Make sure you saved. If using `--no-apply`, run `stag sync --apply-only`.

Languages not being detected

Check `stag languages`. Ensure marker files exist in the project root.

Command not found

Run `stag commands` to see available commands. Project overrides personal, which overrides source.

"Promptfoo not found"

Evals require Promptfoo. Install with `npm install -g promptfoo`.

Evals failing unexpectedly

Use `stag eval --debug` to see full Claude responses and preserve temp files for inspection. Check that `ANTHROPIC_API_KEY` is set. See the Evals Guide for debugging strategies.

"Optimization removed critical content"

The optimizer validates that tool names, file paths, and commands are preserved. See Anchor Validation for what counts as critical content. Generic variable names in code examples (like `config`, `data`, `user`) are intentionally excluded to avoid false positives. If validation still fails, use `--force` to apply anyway, or `--deterministic` for a safer (but less aggressive) optimization.

Optimize not reducing tokens much

Try without `--deterministic` to enable LLM-powered compression. Deterministic mode only does cleanup (whitespace, duplicates, verbose phrases).
MIT