One-shot CLI for OpenAI-compatible chat endpoints.
Create a default config:
```
ai --init
```
Edit the generated file at `$XDG_CONFIG_HOME/ai/config.toml` (or `~/.config/ai/config.toml`), then run:
```
ai "hello from cli"
```
You can also pipe stdin:
```
echo "summarize this" | ai
```

- `--model NAME`: select a model from `[models]` for one call.
- `--prompt NAME`: select a preset prompt from `[prompts]` for one call.
- `--minimal`: disable status output and thinking colorization for that invocation.
- `--strip-thinking`: remove thinking text from output (before colorization).
- `--init`: create a default config file.
- `--history N`: replay the N-th most recent response to stdout (1 = last). Colorization obeys `--minimal`.
- `--history-clear`: delete stored history.
- `--list-models`: print available model names from config.
- `--list-prompts`: print available prompt names from config.
- `--completions <shell>`: print shell completions (`bash`, `zsh`, `fish`).
The client reads `$XDG_CONFIG_HOME/ai/config.toml` (falls back to `~/.config/ai/config.toml`).
```toml
[defaults]
model = "openai"
prompt = "concise"
minimal = false
strip_thinking = false
thinking_delimiters = [
  { start = "<think>", end = "</think>" },
  { start = "[thought]", end = "[/thought]" },
]

[history]
enabled = true
max_entries = 100

[prompts.concise]
text = "Be concise and direct."

[models.openai]
endpoint = "https://api.openai.com"
model = "gpt-4o-mini"
system_prompt = "You are a helpful assistant."
api_key_command = "pass show openai/api-key"
options = "{\"reasoning\":{\"enabled\":true}}"
strip_thinking = false
thinking_delimiters = [
  { start = "<think>", end = "</think>" },
]
```

- `endpoint` accepts either `https://host`, `https://host/v1`, or the full `/v1/chat/completions` URL.
- `api_key` and `api_key_command` are mutually exclusive.
- `defaults.prompt` is required; use an empty prompt text if you want no additional preset prompt.
- `system_prompt` (model system prompt) is always applied first.
- The selected named prompt (preset prompt) is applied next.
- The user input (CLI args or stdin) is applied last.
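The ordering above can be sketched as follows. `build_messages` is a hypothetical helper, not part of the tool, and it assumes the preset prompt is sent as a second system message (the actual role the tool uses is not specified here):

```python
def build_messages(system_prompt: str, preset_prompt: str, user_input: str) -> list[dict]:
    """Assemble chat messages in the documented order:
    model system prompt first, preset prompt next, user input last.
    An empty preset prompt contributes no message."""
    messages = [{"role": "system", "content": system_prompt}]
    if preset_prompt:
        messages.append({"role": "system", "content": preset_prompt})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("You are a helpful assistant.",
                      "Be concise and direct.",
                      "hello from cli")
# The user message always comes last, regardless of which prompts are set.
```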
- `options` (per model) is a raw JSON object merged into the request body; it cannot override `model` or `messages`.
- `strip_thinking` can be set in defaults or per model; `--strip-thinking` forces it on.
- If the API returns a separate reasoning field, it is injected before the main content using the configured thinking delimiters (unless the content already contains delimiters).
- If a response contains only the end delimiter (no start), everything up to that end delimiter is treated as thinking text.
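A minimal sketch of the stripping rule, including the end-delimiter-only case. This illustrates the documented behavior, not the tool's actual implementation; `strip_thinking` here is a hypothetical helper handling a single delimiter pair:

```python
def strip_thinking(text: str, start: str = "<think>", end: str = "</think>") -> str:
    """Remove thinking text delimited by start/end markers.
    If only the end delimiter is present, everything up to and
    including it is treated as thinking text and removed."""
    s, e = text.find(start), text.find(end)
    if s != -1 and e != -1 and e > s:
        # Normal case: drop the delimited block.
        return text[:s] + text[e + len(end):]
    if s == -1 and e != -1:
        # End delimiter only: everything before it counts as thinking.
        return text[e + len(end):]
    return text
```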
History is stored in `$XDG_CACHE_HOME/ai/history.json`.
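The history semantics (`max_entries` cap, `--history N` with 1 = last) can be sketched like this; the helper names and in-memory list are assumptions for illustration, and the actual on-disk JSON layout is not specified here:

```python
def append_history(entries: list[str], response: str, max_entries: int = 100) -> list[str]:
    """Append a response and keep only the most recent max_entries."""
    entries.append(response)
    return entries[-max_entries:]

def replay(entries: list[str], n: int) -> str:
    """Return the n-th most recent entry (1 = last)."""
    return entries[-n]

history: list[str] = []
for r in ["first", "second", "third"]:
    history = append_history(history, r, max_entries=2)
# Oldest entries are dropped once the cap is reached.
```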
Release build:
```
cargo build --release
```
Static binary (example for musl on Linux):
```
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
```
Generate and install (examples):
```
ai --completions bash > /etc/bash_completion.d/ai
ai --completions zsh > /usr/local/share/zsh/site-functions/_ai
ai --completions fish > ~/.config/fish/completions/ai.fish
```