A powerful terminal-based AI assistant that helps you get things done on your computer through natural conversation.
- 🎭 Three Operating Modes - Default (full-power), Lite (fast & safe), and Mentor (collaborative problem-solving)
- 🤖 Multi-Provider Support - Works with OpenAI, OpenRouter, and OpenAI-compatible APIs
- 🔒 Safe Execution - Every command requires your explicit approval with diff preview
- 🛠️ Advanced Tools - Shell execution, file patching, search/replace, grep, find files, file reading, web search, and mentor consultation
- 💬 Slash Commands - Quick actions like `/clear`, `/quit`, `/model`, `/mentor`, and `/lite` for easy control
- 📝 Smart Context - The assistant understands your environment and provides relevant help
- 🎯 Streaming Responses - See the AI's thoughts and reasoning in real-time
- 🧠 Reasoning Effort Control - Configurable reasoning levels (minimal to high) for reasoning models (GPT-5 family, o1/o3)
- ⚡ Command History - Navigate previous inputs with arrow keys
- 🎨 Markdown Rendering - Formatted code blocks and text in the terminal
- 🔄 Retry Logic - Automatic recovery from tool hallucinations and upstream errors
- 🌐 SSH Mode - Execute commands and edit files on remote servers over SSH
Requirements:
- Node.js 16 or higher
- An API key from OpenAI, OpenRouter, or any OpenAI-compatible provider
Install globally via npm:

```bash
npm install --global @qduc/term2
```

Set your API key as an environment variable (see Configuration section for details):

```bash
export OPENAI_API_KEY="your-api-key-here"
```

Start the assistant:

```bash
term2          # Start in default mode (full capabilities)
term2 --lite   # Start in lite mode (fast, read-only)
```

Then simply chat with the AI! Type your question or request, press Enter, and the assistant will help you.
New to term2?
- Working on a codebase/project? Use default mode: `term2`
- Just need general terminal help? Use lite mode: `term2 --lite`
- Tackling a complex problem? Enable mentor mode with the `/mentor` command
See the "Operating Modes" section below for full details.
"What files are in my current directory?"
"Show me my git status"
"Create a backup of my documents folder"
"What's using port 3000?"
"Find all TODO comments in my JavaScript files"
"Help me debug why my server won't start on port 8080"
"Create a new React component called UserProfile"
"Show me the disk usage of my home directory"
"What processes are consuming the most memory?"
"Search for the word 'config' in all .json files"
```bash
# Model selection
term2                      # Start with default model (gpt-5.1)
term2 -m gpt-5.2           # Use a specific model
term2 --model gpt-5-mini   # Use GPT-5 mini for faster/cheaper responses
term2 -r high              # Set reasoning effort to high (for GPT-5 models)
term2 --reasoning medium   # Set reasoning effort to medium

# Operating modes (see "Operating Modes" section for details)
term2 --lite               # Start in lite mode for general terminal work (no codebase)

# SSH Mode - execute on remote servers
term2 --ssh user@host --remote-dir /path/to/project
term2 --ssh deploy@server.com --remote-dir /var/www/app --ssh-port 2222

# Combine SSH with lite mode for lightweight remote assistance
term2 --ssh user@host --remote-dir /path --lite
```

While in the chat, you can use these commands:
- `/clear` - Clear the conversation history
- `/quit` - Exit the application
- `/model [model-name]` - Switch to a different model
- `/mentor` - Toggle mentor mode (see "Operating Modes" section for details)
- `/lite` - Toggle lite mode (see "Operating Modes" section for details)
- `/settings [key] [value]` - Modify runtime settings (e.g., `/settings agent.temperature 0.7`)
term2 offers three modes tailored to different workflows. Choose the mode that matches your current task.
| Mode | Start with | Best for | Tools Available | Context |
|---|---|---|---|---|
| Default | `term2` | Codebase work & development | All editing tools | Full codebase |
| Lite | `term2 --lite` | General terminal tasks (no codebase) | Read-only | None |
| Mentor | Use `/mentor` | Complex codebase problems | All + mentor | Full codebase |
The problem it solves: You need a general-purpose terminal assistant for everyday system tasks—not working with a codebase or project.
Lite mode is designed for general terminal work: system administration, file management, running commands, investigating logs, and SSH sessions. It's not for codebase/project work (no code editing tools, no project context loading). Think of it as your everyday terminal companion for non-coding tasks.
Key benefits:
- 🚀 Fast and lightweight - No codebase context, no project file loading, quick startup
- 🔧 General terminal tools - Shell commands, grep, read files, find files (no code editing)
- 🌐 Perfect for SSH - Ideal for remote server management and investigation
- 🔄 Toggleable - Switch on/off mid-session with the `/lite` command
- 🐚 Shell mode - Press Shift+Tab to toggle direct shell command execution
When to use Lite mode:
- System administration and server management tasks
- Investigating logs, config files, and system issues
- File system navigation and organization
- SSH into servers for maintenance or debugging
- General terminal help when not working on a codebase
- Quick command help and syntax lookups
Example:
```bash
# Everyday terminal assistant (not working with code)
term2 --lite

# SSH server management and investigation
term2 --ssh deploy@server.com --lite

# Remote server debugging
term2 --ssh user@host --remote-dir /var/log --lite
```

The problem it solves: You're tackling a complex codebase problem and need a different perspective or expert consultation.
Mentor mode gives you two AI minds working together on your codebase. Your primary AI does the reconnaissance and implementation work, while a separate (often more powerful) mentor model provides strategic guidance, challenges assumptions, and suggests alternatives. Think of it as pair programming with an expert consultant.
Key benefits:
- 🧠 Two minds, better solutions - Primary AI gathers context, mentor provides strategic guidance
- 🔍 Structured problem-solving - Forces clear articulation of findings before jumping to solutions
- ❓ Challenges assumptions - Mentor actively probes for gaps and alternative approaches
- 🎯 You stay in control - Mentor advises, primary AI implements your decisions
How it works:
- You give the AI a task or problem
- The AI does quick reconnaissance of your codebase
- AI consults the mentor using the `ask_mentor` tool with findings and questions
- Mentor challenges assumptions and provides strategic guidance
- AI implements the solution based on the guidance
Important: The mentor model doesn't have direct access to your codebase. Your primary AI must share all relevant information (code snippets, file paths, findings) when consulting the mentor. This forces clear problem articulation and saves costs on the more expensive mentor model.
When to use Mentor mode:
- Architectural decisions with multiple valid approaches
- "Am I missing something?" sanity checks before major refactoring
- Stuck on a problem after trying multiple solutions
- Designing new features that must fit existing patterns
- Getting a pre-commit review of your approach
Requirements:
- Configure `agent.mentorModel` in settings (e.g., `gpt-5.2`, `claude-opus-4.5`, or `gemini-3-pro-preview`)
- Mentor model should be more capable than the primary model for best results
- Toggle with the `/mentor` command mid-session
Example workflow:
```
You: "Add dark mode support to the app"

AI: [Searches for theme infrastructure]
    [Finds ThemeProvider at src/context/ThemeContext.tsx]

AI: [Consults mentor] "User wants dark mode. Found ThemeProvider that uses
    CSS variables like --background-color. Currently fixed to 'light' theme.
    Propose extending this provider to toggle themes. Confidence: High.
    Does this approach make sense?"

Mentor: "Good find. Before proceeding, check:
    1. Are CSS variables used consistently across all components?
    2. Any third-party UI libraries that need theme integration?
    3. Should theme preference persist across sessions?
    Missing any of these could create inconsistent theming."

AI: [Does additional checks based on mentor's questions]
    [Implements solution addressing all concerns]
```
Modes are mutually exclusive—each represents a different working style matched to your task. You can switch modes mid-session:
- `/lite` - Toggle lite mode (clears history when switching)
- `/mentor` - Toggle mentor mode
- Switching to lite mode automatically disables edit/mentor modes
- Enabling edit or mentor mode automatically disables lite mode
SSH mode enables term2 to execute commands and modify files on remote servers over SSH. This is useful for managing remote deployments, debugging server issues, or working on remote development environments.
- SSH agent running with your keys loaded (`ssh-add`)
- SSH access to the target server
- `--remote-dir` is required to specify the working directory (optional in lite mode - will auto-detect)
```bash
# Basic usage
term2 --ssh user@hostname --remote-dir /path/to/project

# With custom SSH port
term2 --ssh user@hostname --remote-dir /path/to/project --ssh-port 2222

# With lite mode (auto-detects remote directory)
term2 --ssh user@hostname --lite
```

When SSH mode is enabled:

- term2 establishes an SSH connection using your SSH agent for authentication
- All shell commands are executed on the remote server via SSH
- File operations (read, write, patch) are performed remotely using shell commands (`cat`, heredocs)
- The working directory is set to `--remote-dir` on the remote server
- The connection is automatically closed when you exit term2
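The heredoc pattern mentioned above can be sketched as follows. This is a local illustration of the general technique, not term2's actual implementation, and the file path is made up for the demo:

```shell
# Write a text file using a quoted heredoc - the same pattern a tool can
# wrap in an SSH invocation (e.g. ssh user@host 'cat > file <<EOF ... EOF').
# Quoting the delimiter ('EOF') prevents variable expansion in the payload,
# so the file content arrives byte-for-byte as written.
target=/tmp/term2_heredoc_demo.txt
cat > "$target" <<'EOF'
hello from term2
$HOME stays literal because the delimiter is quoted
EOF
cat "$target"
```

Because everything travels as plain shell commands, this works on any remote host with a POSIX shell, but it also explains the text-only limitation noted below.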
SSH mode works seamlessly with lite mode for lightweight remote terminal assistance:
```bash
term2 --ssh user@host --remote-dir /path/to/project --lite
```

This combination provides:
- Remote command execution over SSH
- Read-only tools (grep, find_files, read_file) for exploration
- Minimal context and faster responses
- No file editing tools (safer for production servers)
- Authentication is via SSH agent only (no password prompts)
- Binary file operations are not supported (text files only)
- Large file transfers may be slower than local operations
term2 stores its configuration in:
- macOS: `~/Library/Logs/term2-nodejs/settings.json`
- Linux: `~/.local/state/term2-nodejs/settings.json`
API keys should be set as environment variables for security (never commit them to git):
```bash
# OpenAI (default provider)
export OPENAI_API_KEY="sk-..."

# OpenRouter (for Claude, Gemini, and other models)
export OPENROUTER_API_KEY="sk-or-v1-..."

# Web Search (Tavily)
export TAVILY_API_KEY="tvly-..."
```

To make them permanent, add these exports to your shell configuration file (`~/.bashrc`, `~/.zshrc`, or `~/.profile`).
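For example, persisting the OpenAI key for future bash sessions might look like this (placeholder key; zsh users would target `~/.zshrc` instead):

```shell
# Append the export to your bash startup file so new sessions pick it up.
# Replace the placeholder with your real key - and never commit this file
# to version control with a key inside.
rc="$HOME/.bashrc"
echo 'export OPENAI_API_KEY="sk-your-key-here"' >> "$rc"

# Confirm the line was added
grep OPENAI_API_KEY "$rc" | tail -n 1
```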
Settings (model, provider, temperature, etc.) can be configured via:
- App menu - Use the `/settings` command during a session (e.g., `/settings agent.model gpt-5.2`)
- Settings file - Manually edit the JSON file:
  - macOS: `~/Library/Logs/term2-nodejs/settings.json`
  - Linux: `~/.local/state/term2-nodejs/settings.json`
- CLI flags - Override for a single session (e.g., `-m gpt-5.2`)
You can easily switch between providers by editing settings.json.
1. OpenAI (Default)
```json
{
  "agent": {
    "provider": "openai",
    "model": "gpt-5.1",
    "temperature": 0.7
  }
}
```

2. OpenRouter (Claude, Gemini, DeepSeek)

Access a wide range of models.
```json
{
  "agent": {
    "provider": "openrouter",
    "model": "anthropic/claude-4.5-sonnet"
  }
}
```

3. Local LLMs (Llama.cpp, LM Studio, vLLM)
Run entirely locally for privacy and zero cost. To use a local provider, add it to the providers list and then select it.
Llama.cpp Example:
```json
{
  "providers": [
    {
      "name": "llama.cpp",
      "baseUrl": "http://127.0.0.1:8080/v1"
    }
  ],
  "agent": {
    "provider": "llama.cpp",
    "model": "qwen3-coder"
  }
}
```

LM Studio Example:
```json
{
  "providers": [
    {
      "name": "lm-studio",
      "baseUrl": "http://localhost:1234/v1"
    }
  ],
  "agent": {
    "provider": "lm-studio",
    "model": "local-model"
  }
}
```

Additional settings control shell execution limits, reasoning effort, and the mentor model:

```json
{
  "shell": {
    "timeout": 120000,
    "maxOutputLines": 1000
  },
  "agent": {
    "reasoningEffort": "medium",
    "mentorModel": "gpt-5.2"
  }
}
```

term2 works with multiple AI providers:
- `gpt-5.2` (latest)
- `gpt-5.1` (default)
- `gpt-5`
- `gpt-5-mini`
- `gpt-5.1-mini`
- `gpt-4.1`
- `gpt-4.1-mini`
- `o3` (supports reasoning effort)
- `o3-mini` (supports reasoning effort)
- `o1` (supports reasoning effort)
Access hundreds of models through OpenRouter including:
- Claude models (Anthropic)
- Gemini models (Google)
- Open-source models (Deepseek, GLM, Minimax, Devstral, etc.)
Use CLI flags (`-m model-name`) or the settings file to select OpenRouter models.
term2 can connect to any OpenAI-compatible API. This allows you to use:
- Local Models: Run private models locally via Ollama, LM Studio, vLLM, or LocalAI.
- Self-Hosted: Connect to private deployments of models.
- Other Providers: Any service offering an OpenAI-compatible endpoint (e.g., Groq, Together AI).
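As an illustration, pointing term2 at Groq's OpenAI-compatible endpoint could look like the sketch below. The settings shape mirrors the local-provider examples above; the provider name and model are illustrative, and model availability changes over time:

```json
{
  "providers": [
    {
      "name": "groq",
      "baseUrl": "https://api.groq.com/openai/v1"
    }
  ],
  "agent": {
    "provider": "groq",
    "model": "llama-3.3-70b-versatile"
  }
}
```

Remember to export the matching API key as an environment variable before starting term2.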
- Command Approval - Every destructive operation requires your explicit confirmation
- Diff Preview - See exact file changes before approving patches or edits
- Risk Analysis - Dangerous operations (like `rm -rf`, `git push --force`) are flagged
- Path Safety - Operations on sensitive directories require extra caution
- Dry-Run Validation - Patches are validated before approval to prevent errors
- No Hidden Actions - All tool usage is transparent and visible
- Retry Limits - Automatic abort after consecutive tool failures (default: 3)
- You type a message and press Enter
- The AI analyzes your request and determines if it needs to execute commands
- If a command is needed, you'll see a preview and approval prompt
- After approval, the command runs and results are shown
- The AI uses the results to provide a helpful response
- You stay in full control - reject any command with 'n'
Want to contribute or run from source?
```bash
# Clone the repository
git clone https://github.com/qduc/term2.git
cd term2

# Install dependencies
npm install

# Run in development mode
npm run dev

# Run tests
npm test

# Build
npm run build
```

Make sure you've exported your OpenAI API key:

```bash
export OPENAI_API_KEY="sk-..."
```

After installation, you may need to restart your terminal or run:

```bash
source ~/.bashrc  # or ~/.zshrc
```

If you get permission errors during global installation, use:

```bash
sudo npm install --global @qduc/term2
```

Or configure npm to install globally without sudo: https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally
Make sure your SSH agent is running and has your keys loaded:
```bash
# Start SSH agent if not running
eval "$(ssh-agent -s)"

# Add your SSH key
ssh-add ~/.ssh/id_rsa

# Verify the key is loaded
ssh-add -l
```

Also verify you can connect manually: `ssh user@hostname`

When using `--ssh` without `--lite`, you must also specify `--remote-dir`:

```bash
term2 --ssh user@host --remote-dir /home/user/project
```

With `--lite` mode, `--remote-dir` is optional and will auto-detect:

```bash
term2 --ssh user@host --lite
```

- Choose the right mode - Use lite mode for general terminal work (not codebase), default mode for codebase work, mentor mode for complex codebase problems (see "Operating Modes" section)
- The assistant won't run dangerous commands without your approval
- You can reject any command by choosing 'No' when prompted
- Press Ctrl+C to exit the chat at any time
- Use arrow keys to navigate through your command history
- Be specific in your requests for better results
- Use `/mentor` to get expert consultation on difficult architectural decisions
- Use the `--lite` flag when SSH'ing to servers for general system work without codebase context
Contributions are welcome! Please feel free to submit a Pull Request or open an Issue on GitHub.
MIT License - see LICENSE file for details
Built with:
- OpenAI Agents SDK
- Ink - React for CLI
- TypeScript
- ssh2 - SSH client for Node.js
Made with ❤️ by qduc