Description
Current Situation/Problem
- llmscript has no abstraction layer for model backends.
- All requests assume an OpenAI-style API (prompt → completion).
- Users running self-hosted models (e.g. LLaMA + Ollama, vLLM, LocalAI) cannot integrate easily.
Proposed Feature
- Backend Abstraction Layer: Introduce a pluggable model provider interface (e.g., provider: "openai" | "llama" | "custom"); see the interface sketch after this list.
- Configurable Model Selection: Allow setting the model provider and model name in config or via CLI flags; see the flag example below.
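A minimal sketch of what the abstraction layer could look like, assuming the codebase is Go. All names here (Provider, Complete, Register, New) are hypothetical and only illustrate the shape of the proposal, not an existing llmscript API:

```go
package provider

import "context"

// Provider abstracts a model backend. Each backend (OpenAI, Ollama,
// vLLM, LocalAI, ...) would implement this one interface.
type Provider interface {
	// Name returns the identifier used in config, e.g. "openai" or "llama".
	Name() string
	// Complete sends a prompt to the backend and returns the completion.
	Complete(ctx context.Context, model, prompt string) (string, error)
}

// registry maps provider names to constructors so new backends can be
// plugged in without touching call sites.
var registry = map[string]func() Provider{}

// Register adds a backend under its name, typically from the backend's init().
func Register(name string, ctor func() Provider) {
	registry[name] = ctor
}

// New looks up a provider by the name given in config or on the CLI.
func New(name string) (Provider, bool) {
	ctor, ok := registry[name]
	if !ok {
		return nil, false
	}
	return ctor(), true
}
```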
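And a rough sketch of how model selection might be wired to CLI flags, building on the interface above. The flag names --provider and --model, their defaults, and the import path are placeholders for discussion, not an existing llmscript interface:

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	"example.com/llmscript/provider" // hypothetical import path for the sketch above
)

func main() {
	// Hypothetical flags; real names and defaults are up for discussion.
	providerName := flag.String("provider", "openai", "model backend: openai | llama | custom")
	model := flag.String("model", "gpt-4o-mini", "model name understood by the backend")
	flag.Parse()

	p, ok := provider.New(*providerName)
	if !ok {
		log.Fatalf("unknown provider %q", *providerName)
	}
	out, err := p.Complete(context.Background(), *model, "say hello")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```

The same provider/model pair could equally be read from a config file; the point is that callers only ever see the Provider interface, so adding a self-hosted backend means registering one new implementation.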
Alternatives (Optional)
No response
Additional Information
No response