CLI wrapper for rlm with directory-as-context, JSON-first output, and self-documenting commands.
Upstream RLM: https://github.com/alexzhang13/rlm
This repo includes a Claude Code plugin with an rlm skill. The skill teaches Claude how to use the rlm CLI for code analysis, diff reviews, and codebase exploration.
Claude Code (Interactive):

```
/plugin marketplace add rawwerks/rlm-cli
/plugin install rlm@rlm-cli
```

Claude CLI:

```
claude plugin marketplace add rawwerks/rlm-cli
claude plugin install rlm@rlm-cli
```

The `/rlm` skill gives Claude knowledge of:
- All rlm commands (`ask`, `complete`, `search`, `index`, `doctor`)
- Input types (files, directories, URLs, stdin, literal text)
- Common workflows (diff review, codebase analysis, search + analyze)
- Configuration and environment variables
- Exit codes for error handling
Once installed, Claude can use rlm to analyze code, review diffs, and explore codebases when you ask it to.
```
uv venv
uv pip install -e .
```

Or run it directly from Git with uvx:

```
uvx --from git+https://github.com/rawwerks/rlm-cli.git rlm --help
```

Authentication depends on the backend you choose:

- openrouter: `OPENROUTER_API_KEY`
- openai: `OPENAI_API_KEY`
- anthropic: `ANTHROPIC_API_KEY`
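The backend-to-variable mapping above can be checked in a small preflight sketch (the `missing_key` helper is illustrative, not part of rlm-cli):

```python
import os

# Mapping taken from the list above
BACKEND_KEYS = {
    "openrouter": "OPENROUTER_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def missing_key(backend, environ=os.environ):
    """Return the name of the required env var if it is unset, else None."""
    var = BACKEND_KEYS[backend]
    return None if environ.get(var) else var
```

Calling `missing_key("openai", {})` returns `"OPENAI_API_KEY"`, so a wrapper script can fail fast with a clear message before invoking rlm.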
Export the appropriate key in your shell environment, for example:
```
export OPENROUTER_API_KEY=sk-or-...
```

Ask about a repo or a URL:

```
rlm ask . -q "Summarize this repo" --json
rlm ask https://www.anthropic.com/constitution -q "Summarize this page" --json
```

Same with uvx and OpenRouter:

```
uvx --from git+https://github.com/rawwerks/rlm-cli.git rlm ask https://www.anthropic.com/constitution -q "Summarize Claude's constitution" --backend openrouter --model google/gemini-3-flash-preview --json
```

Ask about a single file, or review a diff from stdin:

```
rlm ask src/rlm_cli/cli.py -q "Explain the CLI flow" --json
git diff | rlm ask - -q "Review this diff" --json
```

Plain completions:

```
rlm complete "Write a commit message" --json
rlm complete "Say hello" --backend openrouter --model z-ai/glm-4.7:turbo --json
```

Flags:

- `--json` outputs JSON only on stdout.
- `--output-format text|json` sets the output format.
- `--backend`, `--model`, `--environment` control the RLM backend.
- `--backend-arg`/`--env-arg`/`--rlm-arg KEY=VALUE` pass extra kwargs.
- `--backend-json`/`--env-json`/`--rlm-json @file.json` merge JSON kwargs.
- `--literal` treats inputs as literal text; `--path` forces filesystem paths.
- `--markitdown`/`--no-markitdown` toggles URL and non-text conversion to Markdown.
- `--verbose` or `--debug` enables verbose backend logging.
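As a sketch of how repeated `KEY=VALUE` flags and a `@file.json` payload might combine (the merge order here is an assumption for illustration, not the CLI's documented behavior):

```python
import json

def merge_kwargs(pairs, json_blob=None):
    """Illustrative merge of KEY=VALUE pairs with a JSON payload.

    JSON values are applied first, then KEY=VALUE pairs override
    them (assumed order; rlm's actual precedence may differ).
    """
    merged = dict(json.loads(json_blob)) if json_blob else {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"expected KEY=VALUE, got {pair!r}")
        merged[key] = value  # values from the command line stay strings
    return merged
```

Note that `partition("=")` splits on the first `=` only, so a value may itself contain `=` signs.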
Full-text search via Tantivy and ripgrep lets the LLM explore codebases efficiently.
```
pip install 'rlm-cli[search]'
```

This installs both Tantivy (ranked document search) and python-ripgrep (fast pattern matching).
When you run rlm ask on a directory, the LLM automatically gets access to two search tools in its REPL:
`rg.search()` - fast pattern matching (ripgrep):

```python
# Find exact patterns or regex matches
hits = rg.search(pattern="class.*Error", paths=["src/"], regex=True)
for h in hits:
    print(f"{h['path']}:{h['line']}: {h['text']}")
```

`tv.search()` - ranked document search (Tantivy):

```python
# Find relevant files by topic (BM25 ranking)
results = tv.search(query="error handling", limit=10)
for r in results:
    print(f"{r['path']} (score: {r['score']:.2f})")
```

When to use which:
- `rg.search()` for: exact strings, function names, class definitions, imports
- `tv.search()` for: concepts, topics, finding related files
The tools are pre-loaded - the LLM can use them directly without importing.
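The two tools compose naturally into a two-phase workflow: rank candidate files by topic, then pattern-match inside them. A minimal sketch, assuming only the result shapes shown above (`path`/`score` from `tv.search`, `path`/`line`/`text` from `rg.search`); the search callables are passed in as parameters so the helper makes no assumptions about the REPL globals:

```python
def topic_then_pattern(tv_search, rg_search, topic, pattern, limit=5):
    """Rank files by topic with Tantivy, then grep only those files.

    tv_search / rg_search are the REPL-provided search functions,
    injected here so the sketch stays self-contained.
    """
    ranked = tv_search(query=topic, limit=limit)          # BM25-ranked docs
    paths = [r["path"] for r in ranked]                   # candidate files
    hits = rg_search(pattern=pattern, paths=paths, regex=True)
    return [(h["path"], h["line"], h["text"]) for h in hits]
```

Inside the REPL this would be called as `topic_then_pattern(tv.search, rg.search, "error handling", r"raise \w+Error")`.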
Index a directory:
```
rlm index ./src
```

Search indexed documents:

```
rlm search "error handling" --path ./src
```

Options:

- `--no-index` - skip auto-indexing directories
- `--force` - force a full reindex (with `rlm index`)
Defaults for `rlm ask`:

- respects `.gitignore`
- skips common dirs like `.git`, `node_modules`, `.venv`
- limits per-file and total bytes
Adjust with:
- `--extensions` (repeat or comma-separated)
- `--include`/`--exclude`
- `--max-file-bytes`/`--max-total-bytes`
- `--hidden`, `--follow-symlinks`
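The byte limits behave roughly like this sketch (an illustrative re-implementation, not rlm's actual traversal code; the default values are hypothetical): files over the per-file cap are skipped, and collection stops once the total cap is reached.

```python
import os

def collect_files(root, max_file_bytes=200_000, max_total_bytes=1_000_000):
    """Illustrative traversal: skip oversized files, stop at the total cap."""
    picked, total = [], 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune common junk dirs, mirroring rlm's defaults
        dirnames[:] = [d for d in dirnames
                       if d not in {".git", "node_modules", ".venv"}]
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > max_file_bytes:
                continue                 # per-file cap: skip this file
            if total + size > max_total_bytes:
                return picked            # total cap: stop collecting
            picked.append(path)
            total += size
    return picked
```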
Config files (YAML): `./rlm.yaml`, `./.rlm.yaml`, `~/.config/rlm/config.yaml`.
Precedence: CLI flags > env vars > config > defaults.
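The precedence chain amounts to a left-wins merge; a sketch with illustrative setting names (not rlm's real config keys):

```python
from collections import ChainMap

def resolve(cli, env, config, defaults):
    """First mapping wins: CLI flags > env vars > config file > defaults."""
    return dict(ChainMap(cli, env, config, defaults))

settings = resolve(
    cli={"model": "gpt-4o"},                        # from command-line flags
    env={"backend": "openai"},                      # from environment variables
    config={"model": "gpt-4o-mini", "json": True},  # from rlm.yaml
    defaults={"backend": "openrouter", "json": False},
)
# model comes from the CLI, backend from the env, json from the config file
```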
Exit codes:

- `0` - success
- `2` - CLI usage error
- `10` - input error
- `11` - config error
- `20` - backend error
- `30` - runtime error
- `40` - index error (search)
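A script can branch on these codes when wrapping rlm with `subprocess`; a sketch (the retry policy is an assumption, not part of rlm-cli):

```python
# Exit-code table from the list above
EXIT_MEANINGS = {
    0: "success",
    2: "CLI usage error",
    10: "input error",
    11: "config error",
    20: "backend error",
    30: "runtime error",
    40: "index error (search)",
}

# Assumption: backend/runtime errors may be transient and worth retrying
RETRYABLE = {20, 30}

def classify(returncode):
    """Map an rlm exit code to (meaning, should_retry)."""
    meaning = EXIT_MEANINGS.get(returncode, "unknown error")
    return meaning, returncode in RETRYABLE

# Usage sketch:
#   result = subprocess.run(["rlm", "ask", ".", "-q", "...", "--json"])
#   meaning, retry = classify(result.returncode)
```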
Install the pre-commit hook to run gitleaks on staged changes:
```
pre-commit install
```