Conversation


@metodn metodn commented Jan 14, 2026

Summary

  • Add new litellm provider that proxies through a local LiteLLM server
  • Provider reads LITELLM_MODEL env var to override model selection
  • Adds run.sh convenience script

Test plan

  • Tested with LiteLLM proxy running locally
  • Verified environment variable override works correctly

Add new litellm provider that proxies through a local LiteLLM server.
This allows using models from various providers through a unified interface.
The provider reads LITELLM_MODEL env var to override model selection.
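
The override described above can be sketched as follows (a minimal illustration; the function and variable names here are hypothetical, not the actual code in laser/main.py):

```python
import os

def resolve_model(provider: str, default_model: str) -> str:
    # For the litellm provider, the LITELLM_MODEL environment variable,
    # when set, takes precedence over the configured default model.
    # Other providers always use their configured default.
    if provider == "litellm":
        return os.environ.get("LITELLM_MODEL", default_model)
    return default_model
```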
Copilot AI review requested due to automatic review settings January 14, 2026 17:49

Copilot AI left a comment


Pull request overview

This PR adds support for LiteLLM as a model provider, enabling Laser to proxy requests through a local LiteLLM server. The integration follows the existing provider pattern and includes environment variable configuration for flexible deployment.

Changes:

  • Added litellm provider configuration using OpenAILegacy with customizable base URL and API key
  • Implemented environment variable override for model selection via LITELLM_MODEL
  • Added convenience script run.sh for local testing with LiteLLM
  • Modified .gitignore to exclude all markdown files

Reviewed changes

Copilot reviewed 2 out of 3 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| laser/main.py | Adds LiteLLM provider configuration and environment variable-based model override logic |
| run.sh | Provides convenience script for running Laser with LiteLLM configuration |
| .gitignore | Adds pattern to ignore all markdown files |
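
Based on the change summary, the litellm provider presumably wires the proxy's base URL and API key from the environment variables used by run.sh. A hedged sketch of what those settings could look like (keyword names are assumptions; the actual OpenAILegacy parameters may differ):

```python
import os

# Hypothetical shape of the litellm provider settings, assuming the proxy's
# base URL and API key come from the same environment variables run.sh exports.
litellm_kwargs = {
    "base_url": os.environ.get("LITELLM_PROXY_API_BASE", "http://127.0.0.1:4000"),
    "api_key": os.environ.get("LITELLM_PROXY_API_KEY", ""),
}
```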



```sh
exit

o use:
```

Copilot AI Jan 14, 2026


Incomplete text. Line 8 appears to be missing the first character. Should be 'To use:' instead of 'o use:'.

Suggested change:

```diff
-o use:
+To use:
```

```sh
$: uv tool install git+https://github.com/ExpressGradient/laser
$: laser

or
```

Copilot AI Jan 14, 2026


Missing capital letter. Should be 'Or' instead of 'or' for proper sentence structure.

Suggested change:

```diff
-or
+Or
```

Comment on lines +1 to +3:

```sh
export LITELLM_MODEL=glm-4.7
export LITELLM_PROXY_API_KEY=sk-1234
export LITELLM_PROXY_API_BASE=http://192.168.43.179:4001
```

Copilot AI Jan 14, 2026


The script contains hardcoded credentials (API key) and a specific IP address. This convenience script should not be committed with real credentials or environment-specific configuration. Consider creating a template file (e.g., run.sh.example) instead, or documenting that users should customize these values.

Suggested change:

```diff
-export LITELLM_MODEL=glm-4.7
-export LITELLM_PROXY_API_KEY=sk-1234
-export LITELLM_PROXY_API_BASE=http://192.168.43.179:4001
+# Default model can be overridden via LITELLM_MODEL
+export LITELLM_MODEL="${LITELLM_MODEL:-glm-4.7}"
+# Require the API key to be provided via the environment; do not hardcode it here.
+if [ -z "${LITELLM_PROXY_API_KEY:-}" ]; then
+    echo "Error: LITELLM_PROXY_API_KEY is not set. Please export it in your environment before running this script." >&2
+    exit 1
+fi
+export LITELLM_PROXY_API_KEY
+# Default API base can be overridden via LITELLM_PROXY_API_BASE; use localhost by default.
+export LITELLM_PROXY_API_BASE="${LITELLM_PROXY_API_BASE:-http://127.0.0.1:4001}"
```

Comment on lines +6 to +14:

```sh
exit

o use:

$: uv tool install git+https://github.com/ExpressGradient/laser
$: laser

or
$: uvx --from git+https://github.com/ExpressGradient/laser laser
```

(No newline at end of file.)

Copilot AI Jan 14, 2026


The exit statement on line 6 makes all subsequent documentation (lines 8-14) unreachable and non-functional. If this is intentional to prevent accidental execution, the documentation should be moved above the exit or the exit should be removed.

Suggested change:

```diff
-exit
-
-o use:
-
-$: uv tool install git+https://github.com/ExpressGradient/laser
-$: laser
-
-or
-$: uvx --from git+https://github.com/ExpressGradient/laser laser
+# To use:
+#
+# $: uv tool install git+https://github.com/ExpressGradient/laser
+# $: laser
+#
+# or
+# $: uvx --from git+https://github.com/ExpressGradient/laser laser
+exit
```

```python
provider_cls, provider_kwargs = PROVIDER_CONFIGS[provider]
if provider_cls is Anthropic:
    provider_kwargs = {**provider_kwargs, "default_max_tokens": max_tokens}
if provider == "litellm":
```

Copilot AI Jan 14, 2026


The LITELLM_MODEL environment variable override behavior should be documented. Add a comment explaining that this allows runtime model override for litellm provider, as this is unique behavior not present in other providers.

Suggested change:

```diff
-if provider == "litellm":
+if provider == "litellm":
+    # Allow runtime model override specifically for the litellm provider via
+    # the LITELLM_MODEL environment variable. This override behavior is unique
+    # to litellm and is not supported by other providers in this CLI.
```
