Configuration
How to configure GolemCore Bot.
See also: Dashboard for the web UI, Tools for tool configuration, MCP for MCP settings, Webhooks for webhook configuration, Model Routing for tier and model setup.
There are three main configuration surfaces:
| Surface | File | Best For |
|---|---|---|
| Runtime config | `preferences/runtime-config.json` | LLM providers, tools, security, auto mode, RAG, voice -- editable at runtime via dashboard |
| User preferences | `preferences/settings.json` | Language, timezone, tier/model overrides, webhook config |
| Spring properties | `application.properties` / env vars | Workspace paths, feature flags, build-time defaults |
The bot stores all state under a base path:
- Spring property: `bot.storage.local.base-path`
- Docker/JAR env var (via Spring): `STORAGE_PATH`

In Docker, you almost always want this mounted:
```shell
docker run -d \
  -e STORAGE_PATH=/app/workspace \
  -v golemcore-bot-data:/app/workspace \
  -p 8080:8080 \
  golemcore-bot:latest
```

The easiest way to configure the bot is via the dashboard:

`http://localhost:8080/dashboard`

See Dashboard for the full guide.
Runtime config is persisted to the workspace:
- File: `preferences/runtime-config.json`
- Dashboard API: `GET /api/settings/runtime`, `PUT /api/settings/runtime`

Secrets (API keys) can be provided as plain strings in JSON; they are wrapped as `Secret` internally.
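As a quick sketch, the runtime config endpoints above can be exercised with `curl` (this assumes a bot instance on `localhost:8080`; the fallback `echo` only covers the case where none is running):

```shell
# Read the current runtime config via the dashboard API;
# print a message instead if no bot is reachable on localhost:8080
curl -s --max-time 5 http://localhost:8080/api/settings/runtime \
  || echo "bot not reachable"

# Updating works the same way (shown as a comment to avoid accidental writes):
# curl -s -X PUT -H 'Content-Type: application/json' \
#   --data @runtime-config.json http://localhost:8080/api/settings/runtime
```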
Configure provider API keys under `llm.providers`. Provider keys must match the `provider` field in `models/models.json`.

```json
{
  "llm": {
    "providers": {
      "openai": {
        "apiKey": "sk-proj-...",
        "baseUrl": null,
        "requestTimeoutSeconds": 300
      },
      "anthropic": {
        "apiKey": "sk-ant-..."
      }
    }
  }
}
```

Tier routing is configured in runtime config under `modelRouter`:
```json
{
  "modelRouter": {
    "balancedModel": "openai/gpt-5.1",
    "balancedModelReasoning": "none",
    "smartModel": "openai/gpt-5.1",
    "smartModelReasoning": "none",
    "codingModel": "openai/gpt-5.2",
    "codingModelReasoning": "none",
    "deepModel": "openai/gpt-5.2",
    "deepModelReasoning": "none",
    "dynamicTierEnabled": true,
    "temperature": 0.7
  }
}
```

Notes:
- `*Reasoning` values depend on the selected model (see `models/models.json`).
- The bot also uses a dedicated `routingModel` internally for some routing/classification flows.
Deep dive: See Model Routing for the full end-to-end flow, dynamic tier upgrades, and debugging tips.
Most tool enablement flags are in `runtime-config.json` under `tools`:

```json
{
  "tools": {
    "filesystemEnabled": true,
    "shellEnabled": true,
    "browserEnabled": true,
    "browserType": "playwright",
    "browserHeadless": true,
    "browserTimeout": 30000,
    "browserUserAgent": "...",
    "browserApiProvider": "brave",
    "braveSearchEnabled": false,
    "braveSearchApiKey": "...",
    "skillManagementEnabled": true,
    "skillTransitionEnabled": true,
    "tierEnabled": true,
    "goalManagementEnabled": true,
    "imap": {
      "enabled": false,
      "host": "imap.example.com",
      "port": 993,
      "username": "user@example.com",
      "password": "...",
      "security": "ssl",
      "sslTrust": "",
      "connectTimeout": 10000,
      "readTimeout": 30000,
      "maxBodyLength": 50000,
      "defaultMessageLimit": 20
    },
    "smtp": {
      "enabled": false,
      "host": "smtp.example.com",
      "port": 587,
      "username": "user@example.com",
      "password": "...",
      "security": "starttls",
      "sslTrust": "",
      "connectTimeout": 10000,
      "readTimeout": 30000
    }
  }
}
```

See Tools for details on each tool.
The browse tool uses Playwright.
Docker requirements (Chromium sandbox):

```shell
docker run -d \
  --shm-size=256m \
  --cap-add=SYS_ADMIN \
  ...
```

Input and tool safety settings live in runtime config under `security`:
```json
{
  "security": {
    "sanitizeInput": true,
    "detectPromptInjection": true,
    "detectCommandInjection": true,
    "maxInputLength": 10000,
    "allowlistEnabled": true,
    "toolConfirmationEnabled": false,
    "toolConfirmationTimeoutSeconds": 60
  }
}
```

Rate limiting is configured under `rateLimit`:

```json
{
  "rateLimit": {
    "enabled": true,
    "userRequestsPerMinute": 20,
    "userRequestsPerHour": 100,
    "userRequestsPerDay": 500,
    "channelMessagesPerSecond": 30,
    "llmRequestsPerMinute": 60
  }
}
```

Automatic history compaction (to prevent context overflow):
```json
{
  "compaction": {
    "enabled": true,
    "maxContextTokens": 50000,
    "keepLastMessages": 20
  }
}
```

Limits for a single user request (one internal tool loop run):
```json
{
  "turn": {
    "maxLlmCalls": 200,
    "maxToolExecutions": 500,
    "deadline": "PT1H"
  }
}
```

The `deadline` is an ISO-8601 duration (`PT1H` = 1 hour).

Telegram channel settings are stored in runtime config under `telegram`:
```json
{
  "telegram": {
    "enabled": false,
    "token": "123456:ABC-DEF...",
    "authMode": "invite_only",
    "allowedUsers": []
  }
}
```

Voice (text-to-speech and transcription) settings live under `voice`:

```json
{
  "voice": {
    "enabled": false,
    "apiKey": "...",
    "voiceId": "21m00Tcm4TlvDq8ikWAM",
    "ttsModelId": "eleven_multilingual_v2",
    "sttModelId": "scribe_v1",
    "speed": 1.0,
    "telegramRespondWithVoice": false,
    "telegramTranscribeIncoming": false
  }
}
```

Auto mode settings live under `autoMode`:

```json
{
  "autoMode": {
    "enabled": false,
    "tickIntervalSeconds": 30,
    "taskTimeLimitMinutes": 10,
    "autoStart": true,
    "maxGoals": 3,
    "modelTier": "default",
    "notifyMilestones": true
  }
}
```

Deep dive: See Auto Mode for the tick cycle, goal/task lifecycle, and diary.
RAG settings live under `rag`:

```json
{
  "rag": {
    "enabled": false,
    "url": "http://localhost:9621",
    "apiKey": "",
    "queryMode": "hybrid",
    "timeoutSeconds": 10,
    "indexMinLength": 50
  }
}
```

Deep dive: See RAG Integration for the indexing/retrieval pipeline and LightRAG server setup.
MCP settings live under `mcp`:

```json
{
  "mcp": {
    "enabled": true,
    "defaultStartupTimeout": 30,
    "defaultIdleTimeout": 5
  }
}
```

See MCP for per-skill MCP configuration.
User preferences are stored separately:
- File: `preferences/settings.json`

This includes language/timezone/notifications and per-user tier/model overrides. Webhooks configuration also lives here; see Webhooks for the full guide.
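As a purely illustrative sketch (these key names are not documented on this page and may differ; treat every field below as hypothetical), a `preferences/settings.json` could look like:

```json
{
  "language": "en",
  "timezone": "Europe/Berlin",
  "notifications": true,
  "tierOverride": "smart"
}
```

Check the file the bot writes on first run, or the Webhooks guide, for the authoritative schema.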
Model capabilities are stored in the workspace at `models/models.json`.
On first run, the bot copies a bundled models.json into the workspace. The dashboard can edit and reload models.
See Model Routing for the full models.json reference and resolution logic.
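As a hedged sketch of what an entry might look like: the `provider` field is documented above (it must match a key under `llm.providers`), but the surrounding structure and every other field name here are hypothetical placeholders, not the real schema:

```json
{
  "models": [
    {
      "id": "openai/gpt-5.1",
      "provider": "openai",
      "reasoning": ["none"]
    }
  ]
}
```

Consult the bundled `models.json` copied into the workspace on first run for the actual format.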
Some settings are still controlled via Spring properties (application config), typically using env vars in Docker:
| Property / Env Var | Description |
|---|---|
| `STORAGE_PATH` | Workspace base path (default: `~/.golemcore/workspace`) |
| `TOOLS_WORKSPACE` | Sandbox for filesystem/shell tools (default: `~/.golemcore/sandbox`) |
| `DASHBOARD_ENABLED` | Enable/disable the web dashboard |
| `BOT_MODEL_SELECTION_ALLOWED_PROVIDERS` | Allowed providers in model picker (default: `openai,anthropic`) |
| `bot.auto-compact.max-tool-result-chars` | Tool result truncation limit (default: 100000) |
| `bot.plan.enabled` | Plan mode feature flag |
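Outside Docker, the property-style settings in the table can be pinned in `application.properties`. A minimal sketch using only the properties named on this page (the `/app/workspace` path and the flag values are just examples):

```properties
# Workspace base path (STORAGE_PATH maps to this via Spring)
bot.storage.local.base-path=/app/workspace
# Tool result truncation limit
bot.auto-compact.max-tool-result-chars=100000
# Plan mode feature flag
bot.plan.enabled=true
```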
Default (macOS/Linux): `~/.golemcore/workspace`

```
workspace/
+-- auto/           # auto mode + plan mode state
+-- memory/         # MEMORY.md + daily notes
+-- models/         # models.json (capabilities)
+-- preferences/    # settings.json, runtime-config.json, admin.json
+-- sessions/       # conversation sessions
+-- skills/         # skills (SKILL.md)
+-- usage/          # usage logs
```
| Endpoint | Method | Description |
|---|---|---|
| `/api/system/health` | GET | Health check |
| `/api/system/config` | GET | Active config flags |
| `/api/system/diagnostics` | GET | Full system diagnostics |
Use `/status` in chat to check active configuration, including current model tier, enabled tools, rate limit status, and LLM usage (24h).
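A quick way to poke the system endpoints from a shell (this assumes a bot on `localhost:8080`; the fallback message covers an unreachable instance):

```shell
# Query each system endpoint; report if the bot is not reachable
for ep in health config diagnostics; do
  curl -s --max-time 5 "http://localhost:8080/api/system/$ep" \
    || echo "$ep: bot not reachable"
done
```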
See also: Dashboard, Tools, MCP, Webhooks, Model Routing
GolemCore Bot -- Apache License 2.0 | GitHub | Issues | Discussions