
# ChatForge

A modern AI chat application with advanced tool orchestration and OpenAI-compatible API proxy


ChatForge is a full-stack AI chat application featuring a Next.js 15 frontend and Node.js backend. It acts as an OpenAI-compatible API proxy with enhanced capabilities including conversation persistence, server-side tool orchestration, multi-provider support, model comparison mode, conversation forking, and cross-platform desktop app support.

*(Demo video: Screen.Recording.2026-01-08.at.23.48.46.mov)*

## Features

### Core Capabilities

- **Server-Side Tool Orchestration** - Unified tool calling with iterative workflows, thinking support, parallel execution, and intelligent error handling
- **Real-Time Streaming** - Server-Sent Events (SSE) with tool execution visibility and abort support
- **Conversation Persistence** - SQLite-backed storage with automatic retention cleanup and migration system
- **Multi-Provider Support** - OpenAI-compatible interface with OpenAI, Anthropic, and Gemini providers
- **Modern UI** - React 19 with markdown rendering, syntax highlighting, code wrapping, HTML preview, and responsive design
- **Prompt Management** - Built-in and custom system prompts with conversation-aware selection

### Advanced Features

- **Model Comparison Mode** - Side-by-side comparison of multiple models with isolated conversation histories
- **Conversation Forking** - Fork conversations at any message to explore alternative paths
- **Parallel Tool Execution** - Configurable concurrent tool execution for improved performance
- **Enhanced WebFetch** - Playwright-based browser automation with SPA support and specialized extractors for Reddit and Stack Overflow
- **Streaming Control** - Abort streaming responses with automatic checkpoint persistence
- **Draft Persistence** - Automatic draft message saving across sessions
- **Desktop App** - Cross-platform Electron app with auto-login and native packaging
- **Linked Conversations** - Support for conversation linking and retrieval in context

### Infrastructure & Security

- **Docker Ready** - Development and production Docker configurations with hot reload support
- **Authentication & User Management** - JWT-based authentication with registration, login, and refresh tokens
- **User-Scoped Multi-Tenancy** - Per-user provider configuration with isolated conversations and settings
- **Retry Logic** - Exponential backoff for API calls with a configurable retry strategy
- **Code Quality** - Husky pre-commit hooks enforce linting before commits
- **Toast Notifications** - User-facing notifications for errors and success messages
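The retry logic above uses exponential backoff with a configurable strategy. A minimal sketch of how such a delay schedule is typically computed (the base, factor, and cap values here are illustrative assumptions, not ChatForge's actual defaults):

```javascript
// Compute capped exponential backoff delays: baseMs * factor^attempt, up to capMs.
// Production implementations usually add random jitter; omitted here for clarity.
function backoffDelays(retries, baseMs = 500, factor = 2, capMs = 8000) {
  const delays = [];
  for (let attempt = 0; attempt < retries; attempt++) {
    delays.push(Math.min(baseMs * factor ** attempt, capMs));
  }
  return delays;
}

// backoffDelays(5) -> [500, 1000, 2000, 4000, 8000]
```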

### AI Capabilities

  • πŸ–ΌοΈ Image Upload & Vision Support - Multimodal vision support with drag-and-drop UI
  • πŸŽ™οΈ Audio Upload Support - Upload and send audio files for voice-enabled models
  • πŸ“Ž File Attachment Support - Text file upload with content extraction
  • 🧠 Reasoning Controls - Support for reasoning effort and extended thinking modes
  • πŸ’Ύ Prompt Caching Optimization - Automatic cache breakpoints to reduce token costs
  • πŸ““ Journal Tool - Persistent memory tool for cross-conversation AI memory
  • 🎯 Model Caching - Background refresh and batch fetching for optimal performance

## Prerequisites

- Node.js 18 or higher
- Docker and Docker Compose (for containerized deployment)
- An OpenAI (or compatible) API key that you'll enter through Settings → Providers & Tools

## Quick Start

### Option 1: One-Click Docker Hub Deployment (Recommended)

Pull pre-built images from Docker Hub - no cloning required:

With Docker Compose:

Download it from this repository:

```bash
curl -O https://raw.githubusercontent.com/qduc/chat/main/docker-compose.yml
```

Or create it manually:

```yaml
services:
  app:
    image: qduc/chat:latest
    environment:
      - IMAGE_STORAGE_PATH=/data/images
      - FILE_STORAGE_PATH=/data/files
      - DB_URL=file:/data/prod.db
    volumes:
      - chatforge_data:/data
      - chatforge_logs:/app/logs
    ports:
      - "${PORT:-3000}:3000"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "node", "-e", "fetch('http://127.0.0.1:3000/health').then(res => { if (res.ok) process.exit(0); process.exit(1); }).catch(() => process.exit(1));"]
      interval: 30s
      timeout: 10s
      start_period: 30s
      retries: 3

volumes:
  chatforge_data:
    driver: local
  chatforge_logs:
    driver: local
```

Then run:

```bash
# Start the stack
docker compose up -d
```

Or with `docker run` (one-liner):

```bash
docker run -d --name chatforge -p 3000:3000 -v chatforge_data:/data -v chatforge_logs:/app/logs -e DB_URL=file:/data/prod.db qduc/chat:latest
```

Visit http://localhost:3000, register your first user, then open Settings → Providers & Tools to enter your API key and base URL.

The production compose file now runs a single app service built from the root multi-stage Dockerfile. That container bundles the Express API, the exported Next.js UI, and the static asset server, so there is no longer a separate frontend or nginx proxy to operate in production.

Optional infrastructure config (add to a `.env` file):

```bash
JWT_SECRET=your-secret-here        # Overrides auto-generated secret
PORT=3000                          # External port (default: 3000)
```
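If you prefer to pin `JWT_SECRET` rather than rely on the auto-generated value, one common way to produce a strong secret (assuming `openssl` is installed) is:

```bash
# Generate a random 32-byte (64 hex character) secret for JWT signing
openssl rand -hex 32
```

Paste the output into your `.env` file as the `JWT_SECRET` value.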

### Option 2: Docker Development (with hot reload)

```bash
# Clone the repository
git clone https://github.com/qduc/chat.git && cd chat

# Copy environment files
cp backend/.env.example backend/.env
# Edit backend/.env and set JWT_SECRET

# Start with hot reload
./dev.sh up --build

# Follow logs
./dev.sh logs -f
```

Visit http://localhost:3003. The development compose file still runs dedicated frontend, backend, and proxy containers to keep hot reload fast, but production images collapse into a single runtime service.

For alternative setup options, see `docs/INSTALLATION.md`.

## Documentation

Detailed guides live in the `docs/` directory.

## Project Structure

```
chat/
├── frontend/          # Next.js 15 + React 19 + TypeScript
├── backend/           # Node.js + Express + SQLite
├── electron/          # Electron desktop app packaging
├── docs/              # Technical documentation
├── proxy/             # Dev-only Nginx reverse proxy config
├── integration/       # Integration tests
├── requests/          # HTTP request examples
├── dev.sh             # Development orchestration
├── prod.sh            # Production management
└── release.sh         # Release management
```

## Testing

```bash
./dev.sh test              # Run all tests
./dev.sh test:backend      # Backend tests only
./dev.sh test:frontend     # Frontend tests only
```

## Deployment Architecture

- **Production** (`docker-compose.yml`, `prod.sh`) – Single `app` container generated by the top-level Dockerfile. The multi-stage build compiles the Next.js frontend to a static export and copies it into the Express backend, which serves both `/api` and the UI while persisting data and logs under `/data`.
- **Development** (`docker-compose.dev.yml`, `dev.sh`) – Dedicated frontend, backend, proxy, and adminer services for fast iteration with hot reload. The nginx proxy that provides the http://localhost:3003 origin only exists in this dev stack.

## Contributing

Contributions are welcome! Please follow these guidelines:

1. Follow existing code patterns and conventions
2. Write tests for new features
3. Run linting: `./dev.sh exec backend npm run lint`
4. Ensure all tests pass: `./dev.sh test`
5. Update documentation as needed

## License

This project is licensed under the MIT License - see the `LICENSE` file for details.
