A unified shell-based system for prompting various LLM providers from markdown files.
LangHub provides a simple, consistent interface to interact with multiple AI language models through a single set of scripts. Process .lmd (LLM Markdown) files to generate documentation, analyze code, or automate AI-powered workflows.
- Unified Interface: Single API for Ollama, Claude, and GitHub Copilot
- .lmd File Processing: Define AI workflows in markdown with embedded prompts
- Context Support: Automatically include files/directories as context
- Provider Flexibility: Switch between local and cloud models easily
- Shell-Based: Pure bash scripts, no additional dependencies beyond provider tools
- Extensible: Easy to add new providers
Ollama:

```bash
# Install Ollama
./lh install ollama

# Pull a model
ollama pull qwen2.5-coder:7b

# Check status and list models
./lh login ollama
```

Claude:

```bash
# Install Claude CLI
./lh install claude

# Login to Claude (opens browser)
./lh login claude
```

Copilot:

```bash
# Subscribe at https://github.com/features/copilot

# Install Copilot CLI
./lh install copilot

# Login to Copilot (opens browser)
./lh login copilot
```

Basic usage:

```bash
# List available models
./lh list <engine>
# Prompt a model
./lh ask <engine> <model-id> "Your prompt text"
# Process .lmd files
./lh render input.lmd > output.md
```

Example workflow:

```bash
# Install and setup Ollama
./lh install ollama
ollama pull qwen2.5-coder:7b
./lh login ollama
# Use it
./lh ask ollama qwen2.5-coder:7b "Write a hello world in Python"
```

Standalone installation:

```bash
git clone https://github.com/yourusername/langhub.git
cd langhub
chmod +x lh *.sh
```

Add LangHub to your project:

```bash
git submodule add https://github.com/yourusername/langhub.git scripts
cd scripts
chmod +x lh *.sh
```

Then use it in your project:

```bash
scripts/lh ask ollama qwen2.5-coder:7b "Your prompt"
```
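For repeatable tasks you can wrap these calls in your project's own tooling. A minimal sketch, assuming a hypothetical `docs.lmd` at the repository root and LangHub vendored under `scripts/`:

```bash
#!/usr/bin/env bash
# Hypothetical helper script: regenerate docs.md from docs.lmd
# via the LangHub submodule vendored under scripts/.
set -euo pipefail
scripts/lh render docs.lmd > docs.md
```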
LangHub provides shell completion for the `lh` command to make it easier to use. Completions are available for Bash, Zsh, and Fish shells.

Features:

- Command completion (`install`, `login`, `ask`, `list`, `render`, `help`)
- Engine completion (`ollama`, `claude`, `copilot`, `all`)
- Model completion (dynamically lists available models for each engine)
- Option completion (`--context`, `--output`, `--log`)
- File completion for `.lmd` files in the `render` command
Installation:
Bash:
```bash
# Temporary (current session only)
source completion/lh.bash
# Permanent (add to ~/.bashrc)
echo "source $(pwd)/completion/lh.bash" >> ~/.bashrc
# System-wide (requires sudo)
sudo cp completion/lh.bash /etc/bash_completion.d/lh
```

Zsh:

```bash
# Add to your fpath (add to ~/.zshrc)
fpath=($(pwd)/completion $fpath)
autoload -Uz compinit && compinit
# Or copy to a standard location
cp completion/lh.zsh /usr/local/share/zsh/site-functions/_lh
```

Fish:

```bash
# Copy to Fish completions directory
mkdir -p ~/.config/fish/completions
cp completion/lh.fish ~/.config/fish/completions/
```

Usage:
After installation, you can use Tab to complete commands, engines, and models:
```
lh <Tab>              # Shows: install login ask list render help
lh install <Tab>      # Shows: ollama claude copilot all
lh ask ollama <Tab>   # Shows: available Ollama models
lh ask claude <Tab>   # Shows: claude-sonnet-4-5, claude-3-5-sonnet-20241022, ...
lh render <Tab>       # Shows: *.lmd files
```

Command reference:

- `lh <command> [args...]` - Main entrypoint for all LangHub commands
- `lh install <engine>` - Install an engine
- `lh login <engine>` - Login/authenticate with an engine
- `lh list <engine>` - List available models
- `lh ask <engine> <model-id> <prompt> [options]` - Prompt a model (see the example below)
- `lh render <file.lmd>` - Process .lmd files
- `lh help` - Show help message
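The `[options]` on `lh ask` are the same flags the completions advertise. A hedged example combining them, assuming `--output` writes the model response to the given file (its exact behavior is not documented here; `summary.md` and `run.log` are illustrative paths):

```bash
# Context from a directory, response to a file, model messages to a log.
# (--output is assumed to write the response to the given path.)
./lh ask ollama qwen2.5-coder:7b "Summarize the documentation" \
    --context lib/ --output summary.md --log run.log
```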
Main installer and provider-specific scripts:

- `install.sh <engine>` - Main installer (routes to provider scripts)
- `install_ollama.sh` - Install Ollama (via `curl -fsSL https://ollama.com/install.sh | sh`)
- `install_claude.sh` - Install Claude CLI (via `curl -fsSL https://claude.ai/install.sh | bash`)
- `install_copilot.sh` - Install Copilot CLI (via `curl -fsSL https://gh.io/copilot-install | bash`)
Main login handler and provider-specific scripts:

- `login.sh <engine>` - Main login handler (routes to provider scripts)
- `login_ollama.sh` - Check Ollama status (no auth required)
- `login_claude.sh` - Authenticate with Claude (via `claude login`)
- `login_copilot.sh` - Authenticate with Copilot (via `copilot auth login`)
Main scripts and provider-specific implementations:

- `list.sh <engine>` - List available models (routes to provider)
- `ask.sh <engine> <model-id> <prompt> [options]` - Prompt a model (routes to provider)
- `render.sh <file.lmd>` - Process .lmd files to markdown

Provider-specific scripts:

- `list_<engine>.sh` / `ask_<engine>.sh` - Provider implementations

Current providers:

- `ollama` - Local Ollama models
- `claude` - Anthropic Claude API
- `copilot` - GitHub Copilot API
Examples:

```bash
# Using Ollama
./lh ask ollama qwen2.5-coder:7b "Write a Python function to calculate factorial"
# Using Claude
./lh ask claude claude-3-5-sonnet-20241022 "Explain quantum computing"
# Using Copilot
./lh ask copilot claude-sonnet-4.5 "Refactor this code for better performance"
```

```bash
# Provide context from a directory
./lh ask ollama qwen2.5-coder:7b "Summarize the documentation" --context lib/
# Provide context from a file
./lh ask claude claude-3-5-sonnet-20241022 "Explain this" --context README.md
```

Create a .lmd file (e.g., `prompts.lmd`):

````markdown
# My Analysis
```ollama, model=qwen2.5-coder:7b, log=ollama.log
Provide a brief summary of the key features.
```
```claude, model=claude-3-5-sonnet-20241022, context=lib/, log=claude.log
Analyze the code architecture and suggest improvements.
```
```copilot, model=claude-sonnet-4.5, context=lib/, log=copilot.log
Generate comprehensive documentation for this codebase.
```
````

Process it:

```bash
./lh render prompts.lmd > output.md
```

This will:
- Execute each code block with the specified model
- Replace code blocks with model responses in the output
- Save logs to the specified files (optional)
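To see that substitution without calling a real model, you can use the `print` engine (documented in the format reference below), which just echoes the prompt, so the rendered result is predictable. Given a hypothetical `test.lmd`:

````markdown
# My Analysis

```print
Provide a brief summary of the key features.
```
````

Running `./lh render test.lmd` replaces the block with the engine's response, which for `print` is the prompt itself (exact spacing may differ):

````markdown
# My Analysis

Provide a brief summary of the key features.
````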
.lmd files are markdown files with special code blocks for LLM prompts:

````markdown
```<engine>, model=<model-id>, context=<path>, log=<logfile>
<prompt text>
```
````

- `<engine>` (required): Provider to use
  - `ollama`, `ollama:http://custom:11434` (remote example below)
  - `claude`
  - `copilot`
  - `print` (for testing, just echoes the prompt)
- `model=<model-id>` (required for non-print engines): Model identifier
  - Ollama: `qwen2.5-coder:7b`, `mistral:7b`, etc.
  - Claude: `claude-3-5-sonnet-20241022`, etc.
  - Copilot: `claude-sonnet-4.5`, `gpt-4`, etc.
- `context=<path>` (optional): Context files/directory
  - File: `context=README.md`
  - Directory: `context=lib/` (loads all .md, .txt, .rst files)
- `log=<logfile>` (optional): Log file for model messages
  - Relative or absolute path
  - Directory will be created if it doesn't exist
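Since `ollama:http://custom:11434` is a valid engine value, a block can target a remote Ollama instance directly (host, port, and log path here are illustrative):

````markdown
```ollama:http://192.168.1.100:11434, model=qwen2.5-coder:7b, log=remote.log
Summarize the repository layout.
```
````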
```
langhub/
├── README.md # This file
├── README-ollama.md # Ollama setup guide
├── README-claude.md # Claude setup guide
├── README-copilot.md # Copilot setup guide
│
├── lh # Unified entrypoint (recommended)
│
├── install.sh # Main installer
├── install_ollama.sh # Ollama installer
├── install_claude.sh # Claude installer
├── install_copilot.sh # Copilot installer
│
├── login.sh # Main login handler
├── login_ollama.sh # Ollama status checker
├── login_claude.sh # Claude authenticator
├── login_copilot.sh # Copilot authenticator
│
├── list.sh # Main: list models
├── list_ollama.sh # Ollama: list models
├── list_claude.sh # Claude: list models
├── list_copilot.sh # Copilot: list models
│
├── ask.sh # Main: prompt models
├── ask_ollama.sh # Ollama: prompt models
├── ask_claude.sh # Claude: prompt models
├── ask_copilot.sh # Copilot: prompt models
│
├── render.sh # Main: process .lmd files
│
├── completion/ # Shell completion scripts
│ ├── lh.bash # Bash completion
│ ├── lh.zsh # Zsh completion
│ └── lh.fish # Fish completion
│
└── test/                  # Test suite
    ├── tests.sh           # Run all tests
    ├── test-list.sh       # Test list scripts
    ├── test-ask.sh        # Test ask scripts
    └── test-render.sh     # Test render script
```
Required:

- `bash` (any recent version)
- `curl` (for HTTP requests)

Optional:

- `jq` (for better JSON parsing and error messages)

```bash
# Install jq
sudo apt install jq   # Debian/Ubuntu
brew install jq       # macOS
```

Provider-specific:
- Ollama: Ollama installed and running
- Claude: Claude CLI installed and authenticated
- Copilot: Copilot CLI installed and authenticated
Run the test suite:
```bash
cd test
./tests.sh
```

Run individual test suites:

```bash
./test-list.sh # Test list scripts
./test-ask.sh # Test ask scripts
./test-render.sh   # Test render script
```

"Permission denied" when running scripts:

```bash
chmod +x *.sh
```

"jq: command not found" warnings:

```bash
# Install jq for better error handling (optional)
sudo apt install jq   # or: brew install jq
```

See the provider README files (README-ollama.md, README-claude.md, README-copilot.md) for detailed troubleshooting.
```bash
# Use Ollama on a different host
./lh list ollama:http://192.168.1.100:11434
./lh ask ollama:http://192.168.1.100:11434 qwen2.5-coder:7b "Your prompt"
```
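If you target the same remote host often, a shell alias keeps the commands short (host and port illustrative):

```bash
# Hypothetical alias for a frequently used remote Ollama instance.
alias lh-remote='./lh ask ollama:http://192.168.1.100:11434'
lh-remote qwen2.5-coder:7b "Your prompt"
```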
```bash
# Install engines
./lh install ollama        # Install Ollama
./lh install claude # Install Claude CLI
./lh install copilot # Install Copilot CLI
./lh install all # Install all engines
# Authenticate with engines
./lh login ollama # Check Ollama status (no auth needed)
./lh login claude # Login to Claude (browser auth)
./lh login copilot # Login to Copilot (browser auth)
./lh login all # Login to all engines
# Or use provider commands directly
ollama pull qwen2.5-coder:7b # Pull Ollama model
claude login # Claude authentication
copilot auth login             # Copilot authentication
```

```bash
# Process multiple .lmd files
for file in *.lmd; do
echo "Processing $file..."
./lh render "$file" > "${file%.lmd}.md"
done
```
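If you have many files and your provider tolerates concurrent requests, the loop can be parallelized; a sketch using `xargs` (the `-P 4` concurrency level is arbitrary):

```bash
# Render up to four .lmd files at a time; each shell invocation gets one file as $1.
find . -maxdepth 1 -name '*.lmd' -print0 |
  xargs -0 -n 1 -P 4 sh -c 'f="$1"; ./lh render "$f" > "${f%.lmd}.md"' _
```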
Contributions are welcome! To add a new provider (a skeleton sketch follows this list):

- Create `install_<provider>.sh` - Installation script
- Create `login_<provider>.sh` - Authentication script
- Create `list_<provider>.sh` - List models script
- Create `ask_<provider>.sh` - Prompt models script
- Update `install.sh`, `login.sh`, `list.sh`, and `ask.sh` to include the new provider
- Create `README-<provider>.md` - Provider documentation
- Add tests in `test/test-ask.sh` and `test/test-list.sh`
- Submit a pull request
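As a starting point, here is a minimal sketch of the prompt script for a hypothetical provider named `acme`; the `acme chat` CLI and its flags are illustrative, and the argument conventions follow the `ask_<engine>.sh` pattern above, so check the existing provider scripts for the exact interface:

```bash
#!/usr/bin/env bash
# ask_acme.sh - prompt a model via the hypothetical "acme" CLI.
# Usage: ask_acme.sh <model-id> <prompt>
set -euo pipefail

model="$1"
prompt="$2"

# Print the response on stdout so callers such as ask.sh and render.sh can capture it.
acme chat --model "$model" --prompt "$prompt"
```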
See CONTRIBUTING.md for more details.
MIT License - see LICENSE file for details.
Created for processing LLM Markdown (.lmd) files in AI-powered documentation workflows.
- Ollama - Run LLMs locally
- Anthropic Claude - Claude AI models
- GitHub Copilot - AI pair programmer