# llmpane

Type-safe streaming chat backend for Python with a drop-in React UI. Pydantic AI agents, SSE, and conversation persistence — just plug in your agent and go.

Not another ChatGPT clone. You control the LLM, the logic, the data — llmpane handles the streaming infrastructure and UI.

## When to use llmpane vs. alternatives

| Use case | Tool |
| --- | --- |
| "I want a self-hosted ChatGPT" | Open WebUI, LibreChat |
| "I want to build a full AI app from scratch" | Vercel AI SDK, LangChain |
| "I have a Python backend and need a chat UI" | llmpane |

## Features

- **Type-safe end-to-end** - Pydantic models in Python, TypeScript types in React
- **SSE streaming** - Real-time token streaming that works naturally with FastAPI
- **Multimodal support** - Send images via paste or file upload for vision-capable LLMs
- **Server-side message IDs** - Simplifies database storage and conversation history
- **Built-in patterns** - ActionMessage for confirmations, RefinementMessage for iterative outputs
- **Fully customizable** - CSS variables for theming, headless mode for complete control

## Quick Start

### Python Backend

```bash
uv add "llmpane[fastapi]"
# or with pip
pip install "llmpane[fastapi]"
```

```python
from fastapi import FastAPI
from llmpane import ChatRequest, StreamChunk, create_sse_response

app = FastAPI()

@app.post("/chat")
async def chat(request: ChatRequest):
    async def generate():
        # Your LLM logic here
        yield StreamChunk(delta="Hello ")
        yield StreamChunk(delta="world!")
        yield StreamChunk(done=True, message_id="msg_123")

    return create_sse_response(generate())
```
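
Once the backend is running, you can sanity-check the stream without the React UI. Here is a minimal sketch using httpx, assuming the server listens on FastAPI's default port 8000, that `ChatRequest` accepts `conversation_id` and `message` fields (as in the Pydantic AI example below), and that chunks are framed as standard SSE `data:` lines:

```python
import httpx

payload = {"conversation_id": "conv_1", "message": "Hi there"}

# Stream the response instead of buffering it, so tokens print as they arrive.
with httpx.stream("POST", "http://localhost:8000/chat", json=payload, timeout=None) as response:
    for line in response.iter_lines():
        # SSE frames each event as a "data: <payload>" line.
        if line.startswith("data:"):
            print(line.removeprefix("data:").strip())
```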

### React Frontend

```bash
npm install @llmpane/react
```

```tsx
import { ChatPane } from "@llmpane/react";
import "@llmpane/react/styles";

function App() {
  return <ChatPane endpoint="/chat" />;
}
```

## Packages

| Package | Description |
| --- | --- |
| `llmpane` | Python package with Pydantic models and FastAPI utilities |
| `@llmpane/react` | React component with hooks and pre-built patterns |

## Patterns

### Image Input (Multimodal)

Users can send images alongside text for vision-capable LLMs:

```tsx
// Image upload is enabled by default
<ChatPane endpoint="/chat" allowImageUpload={true} />
```

Frontend features:

- Paste images from clipboard (Cmd/Ctrl+V)
- Click the image button to select files
- Preview attached images before sending
- Remove individual images with the X button

Backend handling with Pydantic AI:

```python
from llmpane.agent import ChatSession
from pydantic_ai import Agent

# Use a vision-capable model
agent = Agent("google-gla:gemini-2.0-flash")
session = ChatSession(agent=agent)

@app.post("/chat")
async def chat(request: ChatRequest):
    # session.run() automatically converts ImagePart to BinaryContent
    return create_sse_response(
        session.run(request.conversation_id, request.message)
    )
```
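
`ChatSession` is also where conversation persistence lives: turns that share a `conversation_id` continue the same history. A hypothetical two-turn exchange reusing the `session` defined above, assuming `session.run()` yields the same `StreamChunk` objects shown in the Quick Start and keeps per-conversation history keyed by `conversation_id`:

```python
import asyncio

async def demo():
    # First turn establishes context under conversation "conv_1".
    async for _ in session.run("conv_1", "My name is Ada."):
        pass

    # Reusing the same conversation_id lets the agent answer
    # from the accumulated history (assumes in-memory persistence).
    async for chunk in session.run("conv_1", "What is my name?"):
        if chunk.delta:
            print(chunk.delta, end="", flush=True)

asyncio.run(demo())
```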

Message content types:

```python
from llmpane.models import MessageContent, TextPart, ImagePart

# String for text-only (backward compatible)
message: MessageContent = "Hello, world!"

# List of parts for multimodal
message: MessageContent = [
    ImagePart(type="image", data="base64...", media_type="image/png"),
    TextPart(type="text", text="What's in this image?"),
]
```
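
The `data` field carries the image as a base64 string. A short sketch using only the standard library to build a multimodal message from a local file (the path is illustrative, and this assumes `data` expects the bare base64 string rather than a data URL):

```python
import base64

from llmpane.models import ImagePart, MessageContent, TextPart

# Base64-encode the raw image bytes for the ImagePart data field.
with open("photo.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

message: MessageContent = [
    ImagePart(type="image", data=encoded, media_type="image/png"),
    TextPart(type="text", text="Describe this image."),
]
```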

### ActionMessage

For "LLM proposes, user confirms" flows:

```tsx
<ActionMessage
  message={message}
  onAccept={(action) => applyFilter(action)}
  onReject={(action) => console.log("Rejected")}
/>
```

### RefinementMessage

For iterative structured output:

```tsx
<RefinementMessage
  message={message}
  onAccept={(query) => executeQuery(query)}
/>
```

## Examples

See the `examples/fastapi-example` directory for a complete working demo.

```bash
# Clone the repo and navigate to the example
git clone https://github.com/HartBrook/llmpane.git
cd llmpane/examples/fastapi-example

# Install Python dependencies and start the backend
uv sync
uv run python main.py

# Or run on a different port
PORT=8001 uv run python main.py
```

```bash
# In another terminal, start the frontend
cd llmpane/examples/fastapi-example/frontend
npm install
npm run dev

# If the backend is on a different port, set BACKEND_PORT
BACKEND_PORT=8001 npm run dev
```

Then open http://localhost:5173 to see the demo.


## Development

### Running Tests

```bash
# Python package
cd packages/llmpane-py
uv sync --all-extras
uv run pytest -v
```

```bash
# React package
cd packages/llmpane-react
npm install
npm run test:run
```

### Linting

```bash
# Python
cd packages/llmpane-py
uv run ruff check llmpane
uv run mypy llmpane
```

```bash
# React
cd packages/llmpane-react
npm run lint
```

## Publishing

Both packages use GitHub Actions for automated publishing.

### Automatic Publishing (Recommended)

1. Update the version in `packages/llmpane-py/pyproject.toml` and/or `packages/llmpane-react/package.json`
2. Create a GitHub Release with a tag matching the version (e.g., `v0.1.0`)
3. The workflows will automatically build, test, and publish to PyPI and npm

### Manual Publishing

You can also trigger publishing manually via the GitHub Actions "Run workflow" button:

- **Python**: Go to Actions > "Publish Python Package" > Run workflow
  - Option to publish to TestPyPI first for validation
- **React**: Go to Actions > "Publish React Package" > Run workflow
  - Supports `latest`, `beta`, and `next` npm tags
  - Dry run option to validate without publishing

### Required Secrets

Configure these in your repository settings under Settings > Secrets and variables > Actions:

| Secret | Description |
| --- | --- |
| `NPM_TOKEN` | npm automation token with publish access |

For PyPI, we use Trusted Publishing with OIDC. Configure the GitHub publisher in your PyPI project settings:

- Owner: `HartBrook`
- Repository: `llmpane`
- Workflow: `publish-python.yml`
- Environment: `pypi` (or `testpypi` for TestPyPI)

### Environments

Create these environments in Settings > Environments:

| Environment | Used for |
| --- | --- |
| `pypi` | Production PyPI publishing |
| `testpypi` | TestPyPI publishing (optional) |
| `npm` | npm publishing |

## License

MIT
