Type-safe streaming chat backend for Python with a drop-in React UI. Pydantic AI agents, SSE, and conversation persistence — just plug in your agent and go.
Not another ChatGPT clone. You control the LLM, the logic, the data — llmpane handles the streaming infrastructure and UI.
| Use case | Tool |
|---|---|
| "I want a self-hosted ChatGPT" | Open WebUI, LibreChat |
| "I want to build a full AI app from scratch" | Vercel AI SDK, LangChain |
| "I have a Python backend and need a chat UI" | llmpane |
- **Type-safe end-to-end** - Pydantic models in Python, TypeScript types in React
- **SSE streaming** - Real-time token streaming that works naturally with FastAPI
- **Multimodal support** - Send images via paste or file upload for vision-capable LLMs
- **Server-side message IDs** - Simplifies database storage and conversation history
- **Built-in patterns** - `ActionMessage` for confirmations, `RefinementMessage` for iterative outputs
- **Fully customizable** - CSS variables for theming, headless mode for complete control
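Server-side message IDs mean the backend is the single source of truth for message identity: the server mints the ID and sends it in the final stream chunk, so the client never has to reconcile temporary local IDs with stored rows. A minimal sketch of how such IDs might be minted (the `new_message_id` helper and the `msg_` prefix are illustrative, mirroring the quick-start example, not part of the llmpane API):

```python
import uuid


def new_message_id() -> str:
    """Mint a unique, server-generated message ID.

    Hypothetical helper: the msg_ prefix mirrors the quick-start
    example; any collision-resistant scheme works.
    """
    return f"msg_{uuid.uuid4().hex}"


# Each ID is unique, so the database can use it as a primary key.
ids = {new_message_id() for _ in range(1000)}
print(len(ids))  # 1000 distinct IDs
```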
```bash
uv add "llmpane[fastapi]"
# or with pip
pip install "llmpane[fastapi]"
```

```python
from fastapi import FastAPI
from llmpane import ChatRequest, StreamChunk, create_sse_response

app = FastAPI()

@app.post("/chat")
async def chat(request: ChatRequest):
    async def generate():
        # Your LLM logic here
        yield StreamChunk(delta="Hello ")
        yield StreamChunk(delta="world!")
        yield StreamChunk(done=True, message_id="msg_123")

    return create_sse_response(generate())
```

```bash
npm install @llmpane/react
```

```tsx
import { ChatPane } from "@llmpane/react";
import "@llmpane/react/styles";

function App() {
  return <ChatPane endpoint="/chat" />;
}
```

| Package | Description |
|---|---|
| `llmpane` | Python package with Pydantic models and FastAPI utilities |
| `@llmpane/react` | React component with hooks and pre-built patterns |
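Because chunks travel as standard Server-Sent Events, any client can consume the stream with plain line parsing. A minimal sketch of the framing (the `data: <json>` layout is the standard SSE wire format; the `delta`, `done`, and `message_id` field names mirror the `StreamChunk` quick-start example and are assumptions about the serialized shape):

```python
import json


def parse_sse(raw: str):
    """Yield one dict per `data: <json>` line in an SSE stream."""
    for line in raw.splitlines():
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])


# A stream like the quick-start endpoint might produce:
raw = (
    'data: {"delta": "Hello "}\n\n'
    'data: {"delta": "world!"}\n\n'
    'data: {"done": true, "message_id": "msg_123"}\n\n'
)
chunks = list(parse_sse(raw))
text = "".join(c.get("delta", "") for c in chunks)
print(text)  # Hello world!
```

The `@llmpane/react` component does this parsing for you; the sketch only shows what is on the wire.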
Users can send images alongside text for vision-capable LLMs:
```tsx
// Image upload is enabled by default
<ChatPane endpoint="/chat" allowImageUpload={true} />
```

Frontend features:
- Paste images from clipboard (Cmd/Ctrl+V)
- Click the image button to select files
- Preview attached images before sending
- Remove individual images with the X button
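Attached images reach the backend base64-encoded. A sketch of the payload shape an attachment becomes (the `encode_image` helper is illustrative, not llmpane API; the field names mirror the `ImagePart` model shown in the message content types):

```python
import base64


def encode_image(data: bytes, media_type: str) -> dict:
    """Build an ImagePart-shaped payload from raw image bytes.

    Hypothetical helper: llmpane's frontend does this for you;
    this only illustrates the base64 round trip.
    """
    return {
        "type": "image",
        "data": base64.b64encode(data).decode("ascii"),
        "media_type": media_type,
    }


# The PNG magic bytes stand in for a real image file.
png_bytes = b"\x89PNG\r\n\x1a\n"
part = encode_image(png_bytes, "image/png")
assert base64.b64decode(part["data"]) == png_bytes
```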
Backend handling with Pydantic AI:
```python
from llmpane.agent import ChatSession
from pydantic_ai import Agent

# Use a vision-capable model
agent = Agent("google-gla:gemini-2.0-flash")
session = ChatSession(agent=agent)

@app.post("/chat")
async def chat(request: ChatRequest):
    # session.run() automatically converts ImagePart to BinaryContent
    return create_sse_response(
        session.run(request.conversation_id, request.message)
    )
```

Message content types:
```python
from llmpane.models import MessageContent, TextPart, ImagePart

# A plain string for text-only messages (backward compatible)
message: MessageContent = "Hello, world!"

# A list of parts for multimodal messages
message: MessageContent = [
    ImagePart(type="image", data="base64...", media_type="image/png"),
    TextPart(type="text", text="What's in this image?"),
]
```

For "LLM proposes, user confirms" flows:
```tsx
<ActionMessage
  message={message}
  onAccept={(action) => applyFilter(action)}
  onReject={(action) => console.log("Rejected")}
/>
```

For iterative structured output:
```tsx
<RefinementMessage
  message={message}
  onAccept={(query) => executeQuery(query)}
/>
```

See the `examples/fastapi-example` directory for a complete working demo.
```bash
# Clone and navigate to the example
cd examples/fastapi-example

# Install Python dependencies and start the backend
uv sync
uv run python main.py

# Or run on a different port
PORT=8001 uv run python main.py
```

```bash
# In another terminal, start the frontend
cd examples/fastapi-example/frontend
npm install
npm run dev

# If the backend is on a different port, set BACKEND_PORT
BACKEND_PORT=8001 npm run dev
```

Then open http://localhost:5173 to see the demo.
```bash
# Python package
cd packages/llmpane-py
uv sync --all-extras
uv run pytest -v

# React package
cd packages/llmpane-react
npm install
npm run test:run
```

```bash
# Python
cd packages/llmpane-py
uv run ruff check llmpane
uv run mypy llmpane

# React
cd packages/llmpane-react
npm run lint
```

Both packages use GitHub Actions for automated publishing.
1. Update the version in `packages/llmpane-py/pyproject.toml` and/or `packages/llmpane-react/package.json`
2. Create a GitHub Release with a tag matching the version (e.g., `v0.1.0`)
3. The workflows will automatically build, test, and publish to PyPI and npm
You can also trigger publishing manually via the GitHub Actions "Run workflow" button:
- Python: Go to Actions > "Publish Python Package" > Run workflow
  - Option to publish to TestPyPI first for validation
- React: Go to Actions > "Publish React Package" > Run workflow
  - Supports `latest`, `beta`, and `next` npm tags
  - Dry run option to validate without publishing
Configure these in your repository settings under Settings > Secrets and variables > Actions:
| Secret | Description |
|---|---|
| `NPM_TOKEN` | npm automation token with publish access |
For PyPI, we use Trusted Publishing with OIDC. Configure the GitHub publisher in your PyPI project settings:
- Owner: `HartBrook`
- Repository: `llmpane`
- Workflow: `publish-python.yml`
- Environment: `pypi` (or `testpypi` for TestPyPI)
Create these environments in Settings > Environments:
| Environment | Used For |
|---|---|
| `pypi` | Production PyPI publishing |
| `testpypi` | TestPyPI publishing (optional) |
| `npm` | npm publishing |
MIT