Minimal ChatGPT-like chat UI for local / home network use (no built-in auth).

- Next.js app + local SQLite database.
- Model/provider metadata via `modelsdev`.
- Optional tools: skills (`.skills/`), web tools, sandboxed bash tools, Philips Hue control (Hue Gateway v2).
Prereqs (local run):

- Node + npm
- `modelsdev` installed and on `PATH`: `modelsdev --version`
- Create a config: `cp config.toml.example config.toml`
- Set at least one provider API key in your shell:
  - `export VERCEL_AI_GATEWAY_API_KEY=...`
  - `export OPENCODE_API_KEY=...` (if your active provider is OpenCode Zen; if you get `unsupported_country_region_territory`, switch providers or change `base_url`)
- Install deps + run: `npm install`, then `npm run dev`
- Open `http://localhost:3000` (or your machine's LAN IP).
Prereqs (Docker):

- Docker Engine + docker compose v2
- `modelsdev` installed on the host (mounted into the container)
- Create a Docker config: `cp config.docker.toml.example config.docker.toml`
- Create `.env` from `.env.example` and set at least:
  - `REMCOCHAT_ADMIN_TOKEN=$(openssl rand -hex 32)` (required for LAN access + sandboxd auth)
  - `REMCOCHAT_ENABLE_BASH_TOOL=1` (required if you want bash tools / sandboxd)
  - `REMCOCHAT_CONFIG_TOML=./config.docker.toml`
  - `REMCOCHAT_MODELSDEV_CLI_HOST_DIR=/path/to/modelsdev` (must contain `bin/run.js` or `bin/modelsdev`)
  - your provider keys (`VERCEL_AI_GATEWAY_API_KEY` and/or `OPENCODE_API_KEY`)
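Putting the variables above together, a minimal `.env` might look like this (all values are placeholders; paste a real token generated with `openssl rand -hex 32`):

```dotenv
REMCOCHAT_ADMIN_TOKEN=<paste output of: openssl rand -hex 32>
REMCOCHAT_ENABLE_BASH_TOOL=1
REMCOCHAT_CONFIG_TOML=./config.docker.toml
REMCOCHAT_MODELSDEV_CLI_HOST_DIR=/path/to/modelsdev
VERCEL_AI_GATEWAY_API_KEY=<your provider key>
# OPENCODE_API_KEY=<your provider key>  # only if your active provider is OpenCode Zen
```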
- Start:
scripts/start-remcochat.sh --build- Optional reverse proxy:
scripts/start-remcochat.sh --proxy
- Open:
- Host-only:
http://127.0.0.1:3100 - With proxy:
https://<host>/remcochat/
Notes:

- In `docker-compose.yml`, RemcoChat port `3100` is bound to `127.0.0.1` by default; use the proxy (or change compose ports) for LAN access.
- `sandboxd` is private to the Docker network by default; RemcoChat talks to it via `http://sandboxd:8080`.
The optional nginx proxy is defined by:

- `docker-compose.proxy.yml`
- `nginx/remcochat.conf` (update `server_name` / hostnames for your environment)

Generate a local CA + TLS certs: `scripts/generate-proxy-cert.sh`

CA download endpoints (served by the proxy):

- `https://<host>/remcochat-ca.cer`
- `https://<host>/remcochat-ca.mobileconfig` (iOS)
- Local: `config.toml.example`
- Docker: `config.docker.toml.example`
- Override config path: `REMCOCHAT_CONFIG_PATH=/abs/path/to/config.toml`
- Configure via `app.web_tools.search_provider`:
  - `"exa"` (requires `EXA_API_KEY`)
  - `"brave"` (requires `BRAVE_SEARCH_API`)
- This setting controls which web search tool is exposed for both `openai_compatible` and `xai` models.
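As a concrete sketch, selecting Exa as the search provider would look like this in `config.toml` (only the key shown here comes from this README; the rest of the file is omitted):

```toml
# Web search provider for openai_compatible and xai models.
[app.web_tools]
search_provider = "exa"  # requires EXA_API_KEY; use "brave" with BRAVE_SEARCH_API instead
```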
Hue control is implemented via:

- skill: `.skills/hue-instant-control`
- server tool: `hueGateway` (when enabled via config/access policy)

Docs: docs/integrations/hue/hue.md
Train travel data is exposed through:

- server tool: `ovNlGateway` (stations, departures, arrivals, trips, journey detail, disruptions)
- optional skill: `.skills/ov-nl-travel` (explicit `/ov-nl-travel ...` activation)
- UI card: NS-style OV card rendered from `tool-ovNlGateway` output

Setup:

- Enable in `config.toml`:

  ```toml
  [app.ov_nl]
  enabled = true
  access = "localhost"  # or "lan" with admin-token policy
  base_urls = ["https://gateway.apiportal.ns.nl/reisinformatie-api"]
  subscription_key_env = "NS_APP_SUBSCRIPTION_KEY"
  ```

- Export your NS subscription key: `export NS_APP_SUBSCRIPTION_KEY='...'`
Notes:

- No runtime dependency on `NS-App-API-keys.md`; only environment variables are used at runtime.
- Responses are normalized and cached with a short TTL, capped by `app.ov_nl.cache_max_ttl_seconds`.
This project is intended for trusted machines / LAN. There is no built-in auth.

Bash tools are disabled by default and require:

- `app.bash_tools.enabled = true` in config
- `REMCOCHAT_ENABLE_BASH_TOOL=1` at runtime
- if `access = "lan"`: the browser must send `x-remcochat-admin-token` matching `REMCOCHAT_ADMIN_TOKEN` (set in the UI)

Implementation details: docs/agents/bash-tools.md
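For the `access = "lan"` case, the token flow can be sketched as follows (the curl line is illustrative only; the actual endpoint path will vary):

```shell
# Generate a 256-bit admin token; set the same value as REMCOCHAT_ADMIN_TOKEN
# on the server, and paste it into the UI on the client side.
TOKEN=$(openssl rand -hex 32)
echo "token length: ${#TOKEN}"   # 64 hex characters

# Clients must then send the matching header on each request, e.g.:
# curl -H "x-remcochat-admin-token: $TOKEN" http://<host>:3100/
```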
Admin endpoints are disabled by default; enable with `REMCOCHAT_ENABLE_ADMIN=1`.
- SQLite database defaults to `data/remcochat.sqlite` (override with `REMCOCHAT_DB_PATH`).
- Unit: `npm run test:unit`
- E2E (opt-in): `npm run test:e2e` (install WebKit first: `npx playwright install webkit`)

Testing policy: docs/agents/testing.md