
[Feature]: LLM – Support for Multiple Models #243

@wnghdcjfe

Description


Current Situation/Problem

  • The llm script has no abstraction layer for model backends.
  • All requests assume an OpenAI-style API (prompt → completion).
  • Users running self-hosted models (e.g. LLaMA via Ollama, vLLM, or LocalAI) cannot integrate them easily.

Proposed Feature

  • Backend Abstraction Layer: Introduce a pluggable model provider interface (e.g., provider: "openai" | "llama" | "custom").
  • Configurable Model Selection: Allow setting the model provider and model name in the config file or via CLI flags, for example:
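A rough sketch of how the two pieces could fit together is below. Everything in it is illustrative only: the config keys (`provider`, `model`, `baseUrl`), the CLI flags (`--provider`, `--model`), and the identifiers (`ModelProvider`, `OpenAICompatibleProvider`, `registerProvider`, `getProvider`) are assumptions made for this example, not existing APIs of this repo, and TypeScript is used purely for illustration.

```ts
// Hypothetical config / CLI usage this feature would enable:
//
//   # .llmrc.json (example)
//   { "provider": "llama", "model": "llama3:8b", "baseUrl": "http://localhost:11434" }
//
//   # or per invocation
//   llm --provider llama --model llama3:8b "Summarize this diff"

// Minimal request/response shapes shared by all backends.
export interface CompletionRequest {
  prompt: string;
  model: string;
}

export interface CompletionResponse {
  text: string;
}

// The pluggable provider interface: each backend implements `complete`
// and is registered under the key used in config ("openai" | "llama" | "custom").
export interface ModelProvider {
  name: string;
  complete(req: CompletionRequest): Promise<CompletionResponse>;
}

// One implementation can cover every OpenAI-compatible HTTP server,
// including self-hosted ones (Ollama, vLLM, LocalAI) that expose the
// same /v1/completions shape at a different base URL.
export class OpenAICompatibleProvider implements ModelProvider {
  constructor(
    public name: string,
    private baseUrl: string,
    private apiKey?: string,
  ) {}

  async complete(req: CompletionRequest): Promise<CompletionResponse> {
    const res = await fetch(`${this.baseUrl}/v1/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        ...(this.apiKey ? { Authorization: `Bearer ${this.apiKey}` } : {}),
      },
      body: JSON.stringify({ model: req.model, prompt: req.prompt }),
    });
    const data = await res.json();
    return { text: data.choices[0].text };
  }
}

// The llm script would look the backend up by the configured key
// instead of assuming OpenAI directly.
const providers = new Map<string, ModelProvider>();

export function registerProvider(p: ModelProvider): void {
  providers.set(p.name, p);
}

export function getProvider(name: string): ModelProvider {
  const p = providers.get(name);
  if (!p) throw new Error(`Unknown model provider: ${name}`);
  return p;
}
```

A design along these lines keeps the current OpenAI path as the default `provider: "openai"` entry, while self-hosted backends either register their own implementation or reuse the OpenAI-compatible one with a different `baseUrl`, without patching the core script.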

Alternatives (Optional)

No response

Additional Information

No response
