Synthetic Soul is an experimental artificial intelligence project designed to simulate human-like emotions, thought patterns, and relationship dynamics. Its purpose is to create a digital mind that not only responds to user input but also develops an evolving personality, one that reflects emotional depth, personal biases, and individualized sentiments toward different users shaped by unique experiences.
The AI is named Jasmine, short for Just a Simulation Modeling Interactive Neural Engagement, to reflect both its experimental nature and its focus on simulating authentic engagement.
- Emotion simulation with decay, reinforcement, and contextual shifts
- Rich and Lite personality schemas for lightweight or deeper simulation
- Persistent memory and relationship context backed by MongoDB
- Autonomous thinking loops independent of direct user prompts
- Relationship dynamics that evolve based on interaction history
- Async message processing with Redis + RQ workers
- Structured LLM integration with OpenAI (hosted) and Ollama (local) providers
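The decay-and-reinforcement model in the feature list can be sketched as exponential decay toward a baseline plus a weighted nudge from stimuli. This is a hypothetical illustration; the function names, constants, and schema here are not the project's actual API:

```python
import math


def decay_emotion(value: float, baseline: float, rate: float, elapsed_s: float) -> float:
    """Exponentially decay an emotion value toward its baseline.

    `rate` is a per-second decay constant; as elapsed time grows, the
    value converges to `baseline`. Names are illustrative only.
    """
    return baseline + (value - baseline) * math.exp(-rate * elapsed_s)


def reinforce_emotion(value: float, stimulus: float, weight: float = 0.3) -> float:
    """Nudge an emotion toward a stimulus value, clamped to [0, 1]."""
    return max(0.0, min(1.0, value + weight * (stimulus - value)))
```

A background loop would apply `decay_emotion` on a timer and `reinforce_emotion` on each interaction, which matches the "decay, reinforcement, and contextual shifts" behavior described above.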
Jasmine explores affective computing and digital companionship by blending artificial intelligence with principles from psychology and human relationship studies. The goal is not just interaction but evolution — an AI that grows and adapts with users over time.
Synthetic Soul API is a FastAPI service that powers the runtime backend with:
- authenticated guest/user sessions
- async message processing via Redis + RQ
- persistent memory/state in MongoDB
- optional LLM backends (OpenAI, Ollama)
This document is a full setup and operations guide for local development on macOS, Linux, or Windows.
Current version: 1.1.0
Versioning policy:
- URL versioning uses the major version (`/v1/...`)
- Breaking changes require a major bump (`v2`)
- Non-breaking additions/fixes use minor/patch bumps (`1.1.x`, `1.2.x`)

Runtime version metadata:

- `GET /v1/meta/version`
- `X-API-Version` response header on all API responses
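Because URLs carry only the major version, a client can derive its path prefix from the semver string reported by `GET /v1/meta/version`. A minimal sketch (the helper name is an assumption, not part of the API):

```python
def api_prefix(semver: str) -> str:
    """Map a semantic version like '1.1.0' to its URL prefix '/v1'."""
    major = semver.split(".")[0]
    if not major.isdigit():
        raise ValueError(f"not a semver string: {semver!r}")
    return f"/v{major}"
```

This keeps client code correct across non-breaking `1.1.x` → `1.2.x` bumps while surfacing a real `v2` migration explicitly.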
- `GET /v1/` -> active agent name
- `GET /v1/meta/ping` -> liveness check
- `GET /v1/meta/version` -> semver + versioning metadata
- `GET /v1/meta/queue` -> Redis queue + worker diagnostics
- `GET /v1/meta/llm` -> active LLM mode/provider/model diagnostics
- `POST /v1/auth/guest` -> create guest session + access token
- `POST /v1/auth/login` -> login with email/password
- `POST /v1/auth/claim` -> convert guest account to password account
- `POST /v1/auth/refresh` -> rotate refresh/access tokens
- `POST /v1/auth/logout` -> revoke current session
- `GET /v1/auth/me` -> current identity claims
- `POST /v1/messages/submit` -> enqueue async response job
- `GET /v1/jobs/{job_id}` -> poll job status/result
- `GET /v1/jobs/{job_id}/events` -> SSE job progress/status stream
- `GET /v1/messages/conversation` -> current conversation
- `GET /v1/agents/active` -> active agent state
- `GET /v1/thoughts/latest` -> latest thought
- API server: FastAPI (`app/main.py`)
- Worker: RQ worker (`python -m app.worker`)
- Queue transport: Redis
- Persistence: MongoDB
- Background loops: emotional decay + periodic thinking
- Python `3.10+`
- Redis `6+`
- MongoDB `6+`
- Optional: Docker Desktop (recommended for cross-platform local infra)
Use this for Linux server deployment. Two compose modes are available:
Runs api + worker + reverse proxy and expects Redis/Mongo to exist outside this compose project.
```bash
docker compose -f docker-compose.api.yml up -d --build
```

`docker-compose.api.yml` reads env vars from `.env`.
If Redis/Mongo run directly on the Linux host, use:
```env
MONGO_MODE=local
MONGO_CONNECTION_LOCAL=mongodb://host.docker.internal:27017
REDIS_URL=redis://host.docker.internal:6379/0
```

(`host.docker.internal` is mapped in this file using `extra_hosts`.)
Runs everything in containers (including reverse proxy + data services + persistent volumes):
```bash
docker compose up -d --build
```

`docker-compose.yml` automatically wires:

- API + worker to `redis://redis:6379/0`
- API + worker to `mongodb://mongo:27017`
- persistent volumes (`redis_data`, `mongo_data`, `caddy_data`, `caddy_config`)
For this mode, keep `.env` focused on app settings/secrets (for example LLM keys, `DATABASE_NAME`, JWT/Argon2 secrets).
- Internet traffic enters through Caddy on ports `80`/`443`.
- Caddy reverse-proxies requests to the internal service `api:8000`.
- TLS certificates are issued automatically via Let's Encrypt when:
  - `API_DOMAIN` points to your server's public IP via DNS
  - ports `80` and `443` are open in the firewall/security group
- Set this in `.env`:

```env
API_DOMAIN=api.example.com
```

For local-only testing without a public domain, set:

```env
API_DOMAIN=localhost
```

- Copy from `.env.production.example` to `.env` and fill in secrets/URIs.
- Keep `APP_ENV=production` and `DEBUG_MODE=false`.
Use a shared external Docker network so the bot can call the API privately via http://api:8000.
- Create the network once on the server:

```bash
docker network create synthetic-soul-shared
```

- Start this API stack (it now joins `synthetic-soul-shared` automatically):

```bash
docker compose up -d --build
```

- In the bot project, attach the bot service to the same external network and set its API base URL to `http://api:8000`.
  - Example file: `docs/discord-bot-compose.example.yml`
- If needed, override the network name with `SHARED_DOCKER_NETWORK` in `.env`.
```bash
docker compose ps
docker logs -f synthetic-soul-proxy
docker logs -f synthetic-soul-api
docker logs -f synthetic-soul-worker
curl -k https://127.0.0.1/v1/meta/ping
```

To stop either mode:
```bash
docker compose down
docker compose -f docker-compose.api.yml down
```

Docker (recommended, same on macOS/Linux/Windows):
```bash
docker run -d --name redis-stack -p 6379:6379 redis/redis-stack:latest
docker run -d --name mongo -p 27017:27017 -v mongo_data:/data/db mongo:7
```

macOS/Linux:
```bash
cd SyntheticSoulAPI
python3 -m venv .venv
source .venv/bin/activate
```

Windows PowerShell:
```powershell
cd SyntheticSoulAPI
py -m venv .venv
.\.venv\Scripts\Activate.ps1
```

Windows Command Prompt:
```bat
cd SyntheticSoulAPI
py -m venv .venv
.venv\Scripts\activate.bat
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Minimum local `.env` (safe template values; replace keys):
```env
APP_ENV=development
BOT_NAME=jasmine
MODE=lite
LLM_MODE=hosted

MONGO_MODE=local
MONGO_CONNECTION_LOCAL=mongodb://127.0.0.1:27017
# Optional hosted Mongo URI for easy switching:
MONGO_CONNECTION_HOSTED=mongodb+srv://<user>:<pass>@<cluster>/<db>?retryWrites=true&w=majority
# Legacy fallback (still supported):
MONGO_CONNECTION=mongodb://127.0.0.1:27017
DATABASE_NAME=synthetic_soul

REDIS_URL=redis://127.0.0.1:6379/0

OPENAI_API_KEY=replace_me
GPT_FAST_MODEL=gpt-4o-mini
GPT_QUALITY_MODEL=gpt-5-mini

# Local mode (Ollama, OpenAI-compatible API)
OLLAMA_BASE_URL=http://127.0.0.1:11434/v1
OLLAMA_API_KEY=ollama
OLLAMA_FAST_MODEL=qwen2.5:7b
OLLAMA_QUALITY_MODEL=qwen2.5:14b

JWT_SECRET_ENV=replace_with_long_random_secret
ARGON2_PEPPER_ENV=replace_with_long_random_pepper
WEB_UI_DOMAIN=http://127.0.0.1:5173
DEBUG_MODE=true
```

If you want to run LLM calls locally instead of OpenAI:
- Install Ollama.
- Start Ollama:

```bash
ollama serve
```

- Pull your chosen models (examples):

```bash
ollama pull qwen2.5:7b
ollama pull qwen2.5:14b
```

- Set `.env` for local mode:

```env
LLM_MODE=local
MONGO_MODE=local
OLLAMA_BASE_URL=http://127.0.0.1:11434/v1
OLLAMA_API_KEY=ollama
OLLAMA_FAST_MODEL=qwen2.5:7b
OLLAMA_QUALITY_MODEL=qwen2.5:14b
```

- Restart the API + worker so the new env values are loaded.
- Verify the configuration:

```bash
curl http://127.0.0.1:11434/api/tags
curl http://127.0.0.1:8000/v1/meta/llm
```

Generate strong secrets quickly:
```bash
python -c "import secrets; print(secrets.token_urlsafe(64))"
```

Start the API server.

macOS/Linux:

```bash
./.venv/bin/uvicorn app.main:app --reload
```

Windows:

```powershell
.\.venv\Scripts\python -m uvicorn app.main:app --reload
```

Start the worker.

macOS/Linux:

```bash
./.venv/bin/python -m app.worker
```

Windows:

```powershell
.\.venv\Scripts\python -m app.worker
```

Notes:
- On macOS and Windows, the worker defaults to `SimpleWorker` mode to avoid `fork()` issues.
- To force the classic forking worker on fork-capable platforms (e.g., Linux), set `RQ_USE_FORK_WORKER=true`.
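The platform behavior described in these notes can be sketched as a small selection helper. This is illustrative only; the real logic lives in `app/worker.py` and may differ in detail:

```python
import os
import sys


def use_fork_worker() -> bool:
    """Decide whether the classic forking RQ worker should be used.

    Sketch of the documented behavior: SimpleWorker is the default
    (macOS/Windows cannot safely fork()), and RQ_USE_FORK_WORKER=true
    opts into forking on fork-capable platforms.
    """
    if sys.platform in ("darwin", "win32"):
        return False  # fork() is unsafe or unavailable here
    return os.environ.get("RQ_USE_FORK_WORKER", "").lower() == "true"
```

With this shape, unsetting the flag always yields the safe default, and the opt-in only takes effect where `fork()` works.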
```bash
curl http://127.0.0.1:8000/v1/meta/ping
curl http://127.0.0.1:8000/v1/meta/version
curl http://127.0.0.1:8000/v1/meta/queue
curl http://127.0.0.1:8000/v1/meta/llm
```

OpenAPI UI:

- Swagger: `http://127.0.0.1:8000/docs`
- ReDoc: `http://127.0.0.1:8000/redoc`
Auth supports guest-first sessions and password accounts.
- Access token: JWT in `Authorization: Bearer ...`
- Refresh token: cookie-bound, rotated on refresh

`POST /v1/auth/refresh` requires:

- refresh cookies (`sid`, `rtoken`)
- `X-CSRF-Token` header matching the `refresh_csrf` cookie
This prevents cross-site refresh abuse while keeping browser-cookie refresh flow.
```http
POST /v1/auth/guest
```

Example response:

```json
{
  "access_token": "...",
  "username": "guest_xxx",
  "expires_in": 900
}
```

```http
POST /v1/messages/submit
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "message": "Good morning",
  "type": "dm"
}
```

Example response (202 Accepted):

```json
{
  "job_id": "uuid",
  "status": "queued"
}
```

```http
GET /v1/jobs/{job_id}
Authorization: Bearer <access_token>
```

Status values: `queued`, `running`, `succeeded`, `failed`
For long-running local-model jobs, use SSE to get push updates instead of frequent polling:
```http
GET /v1/jobs/{job_id}/events?access_token=<access_token>
```

SSE event types:

- `progress` -> progress updates from Redis pub/sub (`job:{job_id}`)
- `status` -> normalized job status snapshots
- `done` -> terminal status (`succeeded` or `failed`)
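For clients without an SSE library, the wire format is plain `event:`/`data:` lines separated by blank lines. A minimal parser sketch, simplified relative to the full SSE spec (assumes one `data:` line per event and JSON payloads, which matches the event types above):

```python
import json


def parse_sse(stream_text: str):
    """Yield (event_type, data) tuples from raw SSE text.

    Simplified: ignores `id:`/`retry:` fields and multi-line data,
    which the progress/status/done events here do not need.
    """
    event, data = "message", None
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = line[len("data:"):].strip()
        elif line == "" and data is not None:
            # Blank line terminates an event; emit and reset.
            yield event, json.loads(data)
            event, data = "message", None
```

In production, prefer a real `EventSource` (browser) or an SSE client library; this sketch is mainly useful for understanding the stream.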
Recommended client flow:
POST /v1/messages/submitto getjob_id- Open
EventSourceon/v1/jobs/{job_id}/events - On
done, make one finalGET /v1/jobs/{job_id}to fetch canonicalresult - Fall back to polling if SSE disconnects
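If SSE disconnects, a capped exponential backoff keeps the polling fallback cheap for long-running local-model jobs. A sketch of the schedule (interval values are illustrative, not project defaults):

```python
def backoff_schedule(base: float = 0.5, cap: float = 8.0, factor: float = 2.0):
    """Yield ever-longer polling intervals, capped at `cap` seconds."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor
```

A polling loop would sleep for each yielded interval between `GET /v1/jobs/{job_id}` calls and stop once the status is `succeeded` or `failed`.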
`GET /v1/meta/queue` returns:

- worker count and worker states
- queue backlog per queue (`high`/`default`/`low`)
- total backlog
- Redis connectivity signal
Use this endpoint first when jobs remain queued.
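A client-side interpretation of the diagnostics payload might look like the following sketch. The field names `worker_count` and `total_backlog` appear in the worker troubleshooting notes; the backlog threshold and the helper itself are assumptions for illustration:

```python
def queue_healthy(meta: dict) -> tuple[bool, str]:
    """Interpret a /v1/meta/queue payload: workers present, backlog bounded."""
    workers = meta.get("worker_count", 0)
    backlog = meta.get("total_backlog", 0)
    if workers == 0:
        return False, "no workers running; start `python -m app.worker`"
    if backlog > 100:  # illustrative threshold, tune for your deployment
        return False, f"backlog high ({backlog}); consider more workers"
    return True, "ok"
```

Wiring this into a dashboard or cron check surfaces the most common failure (no worker attached) before users notice stuck jobs.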
Symptoms:

- `POST /v1/messages/submit` returns `202`
- `GET /v1/jobs/{id}` returns `200` repeatedly with `status=queued`

Checks:

- Ensure the worker process is running.
- Check `GET /v1/meta/queue`:
  - `worker_count` should be `> 0`
  - `total_backlog` should decrease over time
If you see ObjC initialize/fork crash logs, run the worker with the current default (`SimpleWorker`) via:

```bash
./.venv/bin/python -m app.worker
```

Use the standard worker entrypoint (it now selects `SimpleWorker` and Windows-safe timeout handling automatically):

```powershell
.\.venv\Scripts\python -m app.worker
```

During initial app boot, the client may request protected resources before guest-token acquisition. This is transient and expected.
- Ensure the browser sends cookies (`credentials: "include"`)
- Ensure the client sets `X-CSRF-Token` from the `refresh_csrf` cookie
- Ensure client and API origins are configured in CORS
Ensure expression assets exist in `app/assets/expressions/<BOT_NAME>/` and names match expression strings.
Required for local runtime:

- `DATABASE_NAME`
- `JWT_SECRET_ENV`
- `ARGON2_PEPPER_ENV`

Optional/commonly used:

- `MONGO_MODE` (`hosted` or `local`, default: `hosted`)
- `MONGO_CONNECTION_LOCAL` (used when `MONGO_MODE=local`, default: `mongodb://127.0.0.1:27017`)
- `MONGO_CONNECTION_HOSTED` (used when `MONGO_MODE=hosted`)
- `MONGO_CONNECTION` (legacy fallback if a mode-specific URI is not set)
- `LLM_MODE` (`hosted` or `local`, default: `hosted`)
- Hosted mode:
  - `OPENAI_API_KEY`
  - `GPT_FAST_MODEL`
  - `GPT_QUALITY_MODEL`
- Local mode (Ollama):
  - `OLLAMA_BASE_URL` (default: `http://127.0.0.1:11434/v1`)
  - `OLLAMA_API_KEY` (default: `ollama`)
  - `OLLAMA_FAST_MODEL`
  - `OLLAMA_QUALITY_MODEL`
- `REDIS_URL` (default: `redis://localhost:6379/0`)
- `REDIS_TLS_URL`, `REDIS_CA_CERT`, `REDIS_TLS_INSECURE_SKIP_VERIFY`
- `BOT_NAME`, `MODE`, `DEVELOPER_EMAIL`
- `ACCESS_TTL_MIN`, `REFRESH_TTL_DAYS`
- `THINKING_RATE_SECONDS`, `EMOTIONAL_DECAY_RATE_SECONDS`
- `WEB_UI_DOMAIN`
- `APP_ENV`, `DEBUG_MODE`
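The Mongo settings above resolve in a mode-first, legacy-fallback order. A sketch of that precedence (the helper name is hypothetical; the actual resolution code is in the app's config layer):

```python
DEFAULT_LOCAL_URI = "mongodb://127.0.0.1:27017"


def resolve_mongo_uri(env: dict[str, str]) -> str:
    """Pick the Mongo URI: mode-specific var, then legacy MONGO_CONNECTION,
    then the documented local default (local mode only)."""
    mode = env.get("MONGO_MODE", "hosted")
    key = "MONGO_CONNECTION_LOCAL" if mode == "local" else "MONGO_CONNECTION_HOSTED"
    uri = env.get(key) or env.get("MONGO_CONNECTION")  # legacy fallback
    if uri:
        return uri
    if mode == "local":
        return DEFAULT_LOCAL_URI
    raise RuntimeError("no Mongo URI configured for hosted mode")
```

Taking a plain `dict` rather than reading `os.environ` directly keeps the resolution order easy to unit-test.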
- Use queue diagnostics endpoint during worker/queue debugging.
- Keep API and worker logs in separate terminals.
- When changing contracts, update both API and WebUI in lockstep.
See `docs/CONTRIBUTING.md` for the contribution workflow and expectations.
MIT License.