Add LiteLLM provider support #1
base: main
Conversation
Add a new `litellm` provider that proxies requests through a local LiteLLM server. This allows using models from various providers through a unified interface. The provider reads the `LITELLM_MODEL` env var to override model selection.
Pull request overview
This PR adds support for LiteLLM as a model provider, enabling Laser to proxy requests through a local LiteLLM server. The integration follows the existing provider pattern and includes environment variable configuration for flexible deployment.
Changes:
- Added `litellm` provider configuration using `OpenAILegacy` with customizable base URL and API key (a hedged sketch follows this list)
- Implemented environment variable override for model selection via `LITELLM_MODEL`
- Added convenience script `run.sh` for local testing with LiteLLM
- Modified `.gitignore` to exclude all markdown files
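To make the first two items concrete, here is a minimal sketch of the environment-driven configuration. The `base_url`/`api_key` keyword names and the default values are assumptions for illustration; the PR's exact `OpenAILegacy` arguments are not shown in the excerpts below.

```python
import os

# Hypothetical sketch of the kwargs handed to the OpenAI-compatible client class
# (OpenAILegacy in this PR). Parameter names and defaults here are assumptions.
def litellm_provider_kwargs() -> dict:
    return {
        # Point the client at a local LiteLLM proxy server.
        "base_url": os.environ.get("LITELLM_PROXY_API_BASE", "http://127.0.0.1:4000"),
        # Read the proxy API key from the environment rather than hardcoding it.
        "api_key": os.environ.get("LITELLM_PROXY_API_KEY", "sk-placeholder"),
    }

if __name__ == "__main__":
    # Quick check of what the provider would be configured with.
    print(litellm_provider_kwargs())
```

The review comments below flag exactly these two values in `run.sh`, which is why this sketch reads them from the environment rather than hardcoding them.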
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| laser/main.py | Adds LiteLLM provider configuration and environment variable-based model override logic |
| run.sh | Provides convenience script for running Laser with LiteLLM configuration |
| .gitignore | Adds pattern to ignore all markdown files |
run.sh, lines under review:

```
exit

o use:
```
Copilot AI commented on Jan 14, 2026
Incomplete text. Line 8 appears to be missing the first character. Should be 'To use:' instead of 'o use:'.
Suggested change:

```diff
-o use:
+To use:
```
run.sh, lines under review:

```
$: uv tool install git+https://github.com/ExpressGradient/laser
$: laser

or
```
Copilot AI commented on Jan 14, 2026
Missing capital letter. Should be 'Or' instead of 'or' for proper sentence structure.
Suggested change:

```diff
-or
+Or
```
run.sh, lines under review:

```
export LITELLM_MODEL=glm-4.7
export LITELLM_PROXY_API_KEY=sk-1234
export LITELLM_PROXY_API_BASE=http://192.168.43.179:4001
```
Copilot AI commented on Jan 14, 2026
The script contains hardcoded credentials (API key) and a specific IP address. This convenience script should not be committed with real credentials or environment-specific configuration. Consider creating a template file (e.g., run.sh.example) instead, or documenting that users should customize these values.
Suggested change:

```diff
-export LITELLM_MODEL=glm-4.7
-export LITELLM_PROXY_API_KEY=sk-1234
-export LITELLM_PROXY_API_BASE=http://192.168.43.179:4001
+# Default model can be overridden via LITELLM_MODEL
+export LITELLM_MODEL="${LITELLM_MODEL:-glm-4.7}"
+# Require the API key to be provided via the environment; do not hardcode it here.
+if [ -z "${LITELLM_PROXY_API_KEY:-}" ]; then
+    echo "Error: LITELLM_PROXY_API_KEY is not set. Please export it in your environment before running this script." >&2
+    exit 1
+fi
+export LITELLM_PROXY_API_KEY
+# Default API base can be overridden via LITELLM_PROXY_API_BASE; use localhost by default.
+export LITELLM_PROXY_API_BASE="${LITELLM_PROXY_API_BASE:-http://127.0.0.1:4001}"
```
run.sh, lines under review (end of file; no newline at end of file):

```
exit

o use:

$: uv tool install git+https://github.com/ExpressGradient/laser
$: laser

or
$: uvx --from git+https://github.com/ExpressGradient/laser laser
```
Copilot AI commented on Jan 14, 2026
The exit statement on line 6 makes all subsequent documentation (lines 8-14) unreachable and non-functional. If this is intentional to prevent accidental execution, the documentation should be moved above the exit or the exit should be removed.
Suggested change:

```diff
-exit
-o use:
-$: uv tool install git+https://github.com/ExpressGradient/laser
-$: laser
-or
-$: uvx --from git+https://github.com/ExpressGradient/laser laser
+# To use:
+#
+# $: uv tool install git+https://github.com/ExpressGradient/laser
+# $: laser
+#
+# or
+# $: uvx --from git+https://github.com/ExpressGradient/laser laser
+exit
```
laser/main.py, lines under review:

```python
provider_cls, provider_kwargs = PROVIDER_CONFIGS[provider]
if provider_cls is Anthropic:
    provider_kwargs = {**provider_kwargs, "default_max_tokens": max_tokens}
if provider == "litellm":
```
Copilot AI commented on Jan 14, 2026
The LITELLM_MODEL environment variable override behavior should be documented. Add a comment explaining that this allows runtime model override for litellm provider, as this is unique behavior not present in other providers.
Suggested change:

```diff
 if provider == "litellm":
+    # Allow runtime model override specifically for the litellm provider via
+    # the LITELLM_MODEL environment variable. This override behavior is unique
+    # to litellm and is not supported by other providers in this CLI.
```
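The diff hunk above is truncated right after the `if provider == "litellm":` check, so the override itself is not visible here. As a point of reference only, a minimal, assumed version of this kind of env-var override looks like the following; `resolve_model` is a hypothetical helper name, not code from the PR.

```python
import os

def resolve_model(provider: str, default_model: str) -> str:
    """Return the model to use; LITELLM_MODEL wins only for the litellm provider.

    Hypothetical helper for illustration; the PR's actual code is not shown
    past the truncated hunk above.
    """
    if provider == "litellm":
        # Runtime override via the LITELLM_MODEL environment variable.
        return os.environ.get("LITELLM_MODEL", default_model)
    return default_model

if __name__ == "__main__":
    os.environ.setdefault("LITELLM_MODEL", "glm-4.7")
    # Prints glm-4.7 unless LITELLM_MODEL was already set to something else.
    print(resolve_model("litellm", "gpt-4o-mini"))
```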
Summary
- `litellm` provider that proxies through a local LiteLLM server
- `LITELLM_MODEL` env var to override model selection
- `run.sh` convenience script

Test plan