llm-cli

A user-friendly CLI interface for the llama.cpp server with tool calling support. llm-cli is a local, offline AI agent that uses llama.cpp and can execute shell commands, manipulate files, and perform web searches directly from the terminal.


Table of Contents

  • Screenshots
  • Features
  • Installation
  • Usage
  • System Prompts and Context
  • Configuration
  • Available Tools
  • Development
  • Troubleshooting
  • License
  • Contributing

Screenshots

Startup Screen (screenshots/screenshot.png)

Interactive Chat (screenshots/screenshot2.png)

Features

  • 🎯 Interactive CLI: Chat with your llama.cpp server in an interactive terminal
  • 🔧 Tool Calling: Built-in tools for file operations, shell commands, and more
  • ✅ Confirmation System: Safe execution with user confirmation prompts (y/n/a)
  • 📝 System Prompts: Customizable system prompts and context files
  • 🗂️ Context Hierarchy: Automatic discovery of context files (LLM.md) in your project
  • 🚀 Non-Interactive Mode: Test and debug with the -p flag
  • 🔄 Auto-Restore: Automatically restore all modified files on exit
  • 🌐 Web Search: Built-in web search using DuckDuckGo
  • ⌨️ History Navigation: Navigate previous commands with arrow keys (↑/↓)
  • 🎨 Beautiful UI: Rich terminal formatting with colors, panels, and markdown support

Installation

Prerequisites

  • Python 3.8 or higher
  • A running llama.cpp server (default: http://127.0.0.1:10000)
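For reference, a compatible server can be started with llama.cpp's llama-server binary along the lines of the example below (exact flags depend on your llama.cpp version, and the model path is only a placeholder):

# start llama.cpp's OpenAI-compatible server on the port llm-cli expects by default;
# --jinja enables the chat-template handling that tool calling typically requires
llama-server -m /path/to/model.gguf --host 127.0.0.1 --port 10000 --jinja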

Automatic Installation (Recommended)

Simply run the installer:

./install.sh --install

Or use the launcher (it will install automatically if needed):

./llm-cli-launcher

The installer will:

  • Create a virtual environment in ~/.llm-cli/venv
  • Install all dependencies automatically
  • Create a llm-cli command in ~/.local/bin
  • Set up everything automatically

After installation, you can use llm-cli from anywhere (if ~/.local/bin is in your PATH).

Note: If ~/.local/bin is not in your PATH, add this to your ~/.bashrc or ~/.zshrc:

export PATH="$HOME/.local/bin:$PATH"
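After adding it, reload your shell configuration and check that the command resolves (example assumes bash):

source ~/.bashrc
llm-cli --help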

Manual Installation

If you prefer to install manually:

git clone https://github.com/nemmusu/llm-cli.git
cd llm-cli
pip install -e .

Uninstallation

To completely remove llm-cli:

llm-cli --uninstall

Or if using the installer directly:

./install.sh --uninstall

This will remove:

  • The launcher script from ~/.local/bin
  • The virtual environment
  • Optionally the config directory (you'll be prompted)

Usage

Basic Usage

Interactive Mode

Start the interactive CLI:

llm-cli

Or run directly from the project directory:

./llm-cli

Or specify a custom server:

llm-cli --host http://localhost:8080

Note: All file modifications are automatically tracked and can be restored on exit. Type restore during the session to restore files immediately, or they will be restored automatically when you exit.

Commands in Interactive Mode

  • exit or quit: Exit the CLI (will prompt to restore files)
  • clear: Clear the terminal screen
  • restore: Restore all modified files immediately
  • Ctrl+C: Interrupt current operation (press twice quickly to exit)
  • ↑/↓: Navigate through input history

Non-Interactive Mode

Run a single prompt and exit:

llm-cli -p "List all Python files in the current directory"

Or with a custom host:

llm-cli --host http://192.168.1.100:10000 -p "What files are in this directory?"

This is useful for testing and debugging. Files modified during execution will be automatically restored when the command completes.

Command Line Options

Options:
  --host TEXT          Server host (default: http://127.0.0.1:10000)
  -p, --prompt TEXT    Prompt for non-interactive mode
  -m, --model TEXT     Model name (optional, defaults to "default")
  -y, --auto-confirm   Automatically confirm all tool executions and shell commands without asking
  --debug              Enable debug mode
  --help               Show this message and exit

Model Option (-m, --model): Specifies the model name to use when sending requests to the llama.cpp server. This is useful when:

  • Your server has multiple models loaded and you want to select a specific one
  • Your server requires a specific model name instead of the default
  • You're using a custom model name that differs from "default"

If not specified, the CLI will use "default" as the model name.

Examples

# Interactive mode (default host: http://127.0.0.1:10000)
llm-cli

# Interactive mode with custom host
llm-cli --host http://192.168.1.100:10000

# Non-interactive mode with default host
llm-cli -p "What files are in this directory?"

# Non-interactive mode with custom host
llm-cli --host http://localhost:8080 -p "List all Python files"

# Debug mode
llm-cli --debug -p "Read the README file"

# Auto-confirm mode (no prompts for tool execution)
llm-cli -y -p "Create test.txt with content 'Hello World' and list directory"

# Specify a custom model name
llm-cli -m "my-custom-model" -p "Hello, what can you do?"

# Combine options: custom model, host, and auto-confirm
llm-cli --host http://192.168.1.100:10000 -m "gpt-4" -y -p "List directory"

# Run directly from project directory
./llm-cli --host http://127.0.0.1:10000 -p "Test connection"

System Prompts and Context

System Prompt

The system prompt defines the base behavior of the model. It is optional: if no system prompt file exists, the CLI works without one.

The CLI searches for system prompt files in this order:

  1. Project root: system_prompt.md in the current working directory
  2. Git root: system_prompt.md in the git repository root (if in a git repo)
  3. Home directory: ~/.llm-cli/system.md
  4. Environment variable: Path specified by LLM_CLI_SYSTEM_MD

Note: The project includes a default system_prompt.md file in the root with comprehensive instructions for the agent. You can customize it or create your own.
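For example, to use a system prompt stored outside the project, point the environment variable at it (the path below is purely illustrative):

export LLM_CLI_SYSTEM_MD="$HOME/prompts/my-agent.md"
llm-cli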

Context Files (LLM.md)

Context files provide project-specific instructions and information. They are automatically discovered and loaded in this order:

  1. Global: ~/.llm-cli/LLM.md - Context for all your projects
  2. Project: LLM.md or LLM_CLI.md files from git root to current directory
  3. Subdirectory: LLM.md files in subdirectories (respects .gitignore)
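As an illustration, a layout using all three levels might look like this (paths are examples only):

~/.llm-cli/LLM.md            # global context, loaded for every project
my-project/                  # git root
├── LLM.md                   # project-wide context
└── src/
    └── LLM.md               # additional context for files under src/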

Example LLM.md

# Project: My Python CLI

## Coding Style
- Use 4 spaces for indentation
- Follow PEP 8 guidelines
- Always add type hints

## Project Structure
- Main code in `src/`
- Tests in `tests/`
- Use pytest for testing

Import Support

You can import other files in your context files using @file.md syntax:

# Main Context

@./docs/architecture.md
@../shared/style-guide.md

Configuration

Environment variables:

  • LLM_CLI_SYSTEM_MD: Path to custom system prompt file
  • LLM_CLI_CONTEXT_FILENAME: Context file names (default: LLM.md,LLM_CLI.md)
  • LLM_CLI_HOME: Home directory for global files (default: ~/.llm-cli)
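These can be exported from your shell profile, for example (the system prompt path shown mirrors the home-directory fallback; the other values are the documented defaults):

export LLM_CLI_SYSTEM_MD="$HOME/.llm-cli/system.md"
export LLM_CLI_CONTEXT_FILENAME="LLM.md,LLM_CLI.md"
export LLM_CLI_HOME="$HOME/.llm-cli"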

Available Tools

The CLI includes several built-in tools:

  • read_file: Read the contents of a file
  • write_file: Write content to a file (requires confirmation)
  • run_shell_command: Execute shell commands (requires confirmation)
  • list_directory: List directory contents
  • search_files: Search for files matching a pattern
  • web_search: Search the web using DuckDuckGo (no confirmation required)
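As a quick illustration, a prompt like the one below will usually lead the model to call the web_search tool (whether a given tool is actually invoked depends on the model):

llm-cli -p "Search the web for the latest llama.cpp release and summarize it"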

Tool Confirmation

Tools that modify the system (write_file, run_shell_command) require confirmation:

  • y (Yes): Execute once
  • n (No): Cancel
  • a (Always): Always allow this tool or shell command for this session (preference resets on new session)

Note: You can use the -y or --auto-confirm flag when starting llm-cli to automatically confirm all tool executions and shell commands without any prompts. This is useful for automated scripts or when you trust the model completely.

For shell commands, you can also allow specific commands individually (e.g., allow cat and grep but require confirmation for others).

Preferences are only stored in memory for the current session and reset when you start a new session.

Development

Project Structure

llm-cli/
├── llm_cli/
│   ├── __init__.py
│   ├── __main__.py          # Entry point
│   ├── cli.py               # Main CLI interface
│   ├── client.py            # llama.cpp API client
│   ├── tools.py             # Tool definitions and execution
│   ├── confirmation.py      # Confirmation system
│   ├── system_prompt.py     # System prompt manager
│   ├── backup.py            # File backup and restore
│   └── utils.py             # Utility functions
├── screenshots/             # Screenshots for documentation
│   ├── screenshot.png       # Startup screen
│   └── screenshot2.png      # Interactive chat example
├── system_prompt.md         # Default system prompt
├── requirements.txt
├── setup.py
├── install.sh               # Installation script
├── llm-cli-launcher         # Launcher script
└── README.md

Troubleshooting

Connection Issues

If you get connection errors:

  1. Make sure the llama.cpp server is running
  2. Check the server host and port
  3. Verify the server supports OpenAI-compatible API
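A quick way to confirm the first two points and the API compatibility is to query the server directly (adjust host and port to your setup):

# should return a JSON list of models if the OpenAI-compatible endpoint is up
curl http://127.0.0.1:10000/v1/models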

Tool Calling Not Working

Ensure your llama.cpp server supports function calling. Some models may need specific configuration.

Context Files Not Loading

  • Check file names match LLM.md or LLM_CLI.md
  • Verify files are readable
  • Use --debug flag to see what's being loaded

License

MIT License

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
