4 changes: 2 additions & 2 deletions .github/workflows/README.md
@@ -86,6 +86,6 @@ uv sync --dev
cp .env.template .env.test
# Add your API keys to .env.test

# Run test (modify CACHED_MODE in test_integration.py if needed)
uv run pytest test_integration.py::test_full_pipeline_integration -v -s
# Run Robot Framework integration tests
uv run robot --outputdir test-results --loglevel INFO tests/integration/integration_test.robot
```
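The `uv run robot` command above writes `output.xml` (plus `log.html` and `report.html`) into the `--outputdir`. As an illustrative sketch only (not code from this repo), CI tooling can pull pass/fail totals from `output.xml` with Python's standard-library XML parser; the simplified fragment below assumes Robot Framework's documented `<statistics>` layout:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the <statistics> section of a Robot Framework output.xml
sample = """
<robot>
  <statistics>
    <total>
      <stat pass="5" fail="1" skip="0">All Tests</stat>
    </total>
  </statistics>
</robot>
"""

root = ET.fromstring(sample)
stat = root.find("./statistics/total/stat")
print(f"pass={stat.get('pass')} fail={stat.get('fail')}")  # -> pass=5 fail=1
```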
9 changes: 8 additions & 1 deletion .github/workflows/robot-tests.yml
@@ -61,7 +61,6 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: "3.12"
cache: 'pip'

- name: Install uv
uses: astral-sh/setup-uv@v4
@@ -94,6 +93,14 @@ jobs:
TEST_DEVICE_NAME=robot-test
EOF

- name: Create test config.yml
run: |
echo "Copying test configuration file..."
mkdir -p config
cp tests/configs/deepgram-openai.yml config/config.yml
echo "✓ Test config.yml created from tests/configs/deepgram-openai.yml"
ls -lh config/config.yml

- name: Start test environment
working-directory: backends/advanced
env:
10 changes: 10 additions & 0 deletions .gitignore
@@ -4,6 +4,16 @@
!**/.env.template
**/memory_config.yaml
!**/memory_config.yaml.template
tests/setup/.env.test

# Main config (user-specific)
config/config.yml
!config/config.yml.template

# Config backups
config/*.backup.*
config/*.backup*

example/*
**/node_modules/*
**/ollama-data/*
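The new backup ignore rules can be sanity-checked with a quick approximation (as an aside, `config/*.backup*` already subsumes `config/*.backup.*`). Note that `fnmatch` is only a rough stand-in for real gitignore semantics — gitignore's `*` does not cross `/`, and `!` negation rules are not modeled — so this is purely an illustrative sketch:

```python
from fnmatch import fnmatch

patterns = ["config/config.yml", "config/*.backup.*", "config/*.backup*"]
paths = [
    "config/config.yml",                  # ignored: user-specific main config
    "config/config.yml.backup.20240101",  # ignored: timestamped backup
    "config/config.yml.template",         # kept: template is re-included via !
]

for path in paths:
    ignored = any(fnmatch(path, p) for p in patterns)
    print(path, "ignored" if ignored else "kept")
```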
6 changes: 3 additions & 3 deletions CLAUDE.md
@@ -116,11 +116,11 @@ cp .env.template .env # Configure API keys

# Manual test execution (for debugging)
source .env && export DEEPGRAM_API_KEY && export OPENAI_API_KEY
uv run pytest tests/test_integration.py::test_full_pipeline_integration -v -s
uv run robot --outputdir test-results --loglevel INFO ../../tests/integration/integration_test.robot

# Leave test containers running for debugging (don't auto-cleanup)
CLEANUP_CONTAINERS=false source .env && export DEEPGRAM_API_KEY && export OPENAI_API_KEY
uv run pytest tests/test_integration.py::test_full_pipeline_integration -v -s
uv run robot --outputdir test-results --loglevel INFO ../../tests/integration/integration_test.robot

# Manual cleanup when needed
docker compose -f docker-compose-test.yml down -v
@@ -390,7 +390,7 @@ docker compose up --build -d

### Testing Strategy
- **Local Test Scripts**: Simplified scripts (`./run-test.sh`) mirror CI workflows for local development
- **End-to-End Integration**: `test_integration.py` validates complete audio processing pipeline
- **End-to-End Integration**: Robot Framework tests (`tests/integration/integration_test.robot`) validate complete audio processing pipeline
- **Speaker Recognition Tests**: `test_speaker_service_integration.py` validates speaker identification
- **Environment Flexibility**: Tests work with both local .env files and CI environment variables
- **Automated Cleanup**: Test containers are automatically removed after execution
14 changes: 7 additions & 7 deletions Docs/getting-started.md
@@ -179,9 +179,9 @@ After configuration, verify everything works with the integration test suite:

# Alternative: Manual test with detailed logging
source .env && export DEEPGRAM_API_KEY OPENAI_API_KEY && \
uv run pytest tests/test_integration.py -vv -s --log-cli-level=INFO
uv run robot --outputdir ../../test-results --loglevel INFO ../../tests/integration/integration_test.robot
```
This end-to-end test validates the complete audio processing pipeline.
This end-to-end test validates the complete audio processing pipeline using Robot Framework.

## Using the System

@@ -342,7 +342,7 @@ curl -X POST "http://localhost:8000/api/process-audio-files" \

**Implementation**:
- **Memory System**: `src/advanced_omi_backend/memory/memory_service.py` + `src/advanced_omi_backend/controllers/memory_controller.py`
- **Configuration**: memory settings in `config.yml` (memory section)
- **Configuration**: memory settings in `config/config.yml` (memory section)

### Authentication & Security
- **Email Authentication**: Login with email and password
@@ -541,10 +541,10 @@ OPENMEMORY_MCP_URL=http://host.docker.internal:8765

> 🎯 **New to memory configuration?** Read our [Memory Configuration Guide](./memory-configuration-guide.md) for a step-by-step setup guide with examples.

The system uses **centralized configuration** via `config.yml` for all models (LLM, embeddings, vector store) and memory extraction settings.
The system uses **centralized configuration** via `config/config.yml` for all models (LLM, embeddings, vector store) and memory extraction settings.

### Configuration File Location
- **Path**: repository `config.yml` (override with `CONFIG_FILE` env var)
- **Path**: repository `config/config.yml` (override with `CONFIG_FILE` env var)
- **Hot-reload**: Changes are applied on next processing cycle (no restart required)
- **Fallback**: If file is missing, system uses safe defaults with environment variables

@@ -613,7 +613,7 @@ If you experience JSON parsing errors in fact extraction:

2. **Enable fact extraction** with reliable JSON output:
```yaml
# In config.yml (memory section)
# In config/config.yml (memory section)
fact_extraction:
enabled: true # Safe to enable with GPT-4o
```
@@ -727,5 +727,5 @@ curl -H "Authorization: Bearer $ADMIN_TOKEN" \
- **Connect audio clients** using the WebSocket API
- **Explore the dashboard** to manage conversations and users
- **Review the user data architecture** for understanding data organization
- **Customize memory extraction** by editing the `memory` section in `config.yml`
- **Customize memory extraction** by editing the `memory` section in `config/config.yml`
- **Monitor processing performance** using debug API endpoints
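The config lookup order documented above (a `CONFIG_FILE` env-var override, then the repository `config/config.yml`, then safe defaults from environment variables) can be sketched as a small helper. This is a hypothetical illustration of the documented behavior, not the actual `model_registry.py` code:

```python
import os

def resolve_config_path(env, default="config/config.yml"):
    """Return the config path to load, or None to signal env-var fallback."""
    # CONFIG_FILE env var overrides the default repository path
    path = env.get("CONFIG_FILE", default)
    if not os.path.exists(path):
        # Missing file: caller falls back to safe defaults + environment variables
        return None
    return path

# No override and no file on disk -> fall back to defaults
print(resolve_config_path({}, default="config/does-not-exist.yml"))  # -> None
```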
12 changes: 4 additions & 8 deletions backends/advanced/.env.template
@@ -45,18 +45,14 @@ OPENAI_MODEL=gpt-4o-mini
# CHAT_TEMPERATURE=0.7

# ========================================
# SPEECH-TO-TEXT CONFIGURATION (Choose one)
# SPEECH-TO-TEXT CONFIGURATION (API Keys Only)
# ========================================
# Provider selection is in config.yml (defaults.stt)

# Option 1: Deepgram (recommended for best transcription quality)
# Deepgram (cloud-based, recommended)
DEEPGRAM_API_KEY=

# Option 2: Parakeet ASR service from extras/asr-services
# PARAKEET_ASR_URL=http://host.docker.internal:8767

# Optional: Specify which provider to use ('deepgram' or 'parakeet')
# If not set, will auto-select based on available configuration (Deepgram preferred)
# TRANSCRIPTION_PROVIDER=
# Note: Parakeet ASR URL configured in config.yml

# ========================================
# SPEECH DETECTION CONFIGURATION
20 changes: 10 additions & 10 deletions backends/advanced/Docs/README.md
@@ -13,7 +13,7 @@ Welcome to chronicle! This guide provides the optimal reading sequence to unders
- What the system does (voice → memories)
- Key features and capabilities
- Basic setup and configuration
- **Code References**: `src/advanced_omi_backend/main.py`, `config.yml`, `docker-compose.yml`
- **Code References**: `src/advanced_omi_backend/main.py`, `config/config.yml`, `docker-compose.yml`

### 2. **[System Architecture](./architecture.md)**
**Read second** - Complete technical architecture with diagrams
@@ -70,7 +70,7 @@ Welcome to chronicle! This guide provides the optimal reading sequence to unders

## 🔍 **Configuration & Customization**

### 6. **Configuration File** → `../config.yml`
### 6. **Configuration File** → `../config/config.yml`
**Central configuration for all extraction**
- Memory extraction settings and prompts
- Quality control and debug settings
@@ -86,11 +86,11 @@ Welcome to chronicle! This guide provides the optimal reading sequence to unders
1. [quickstart.md](./quickstart.md) - System overview
2. [architecture.md](./architecture.md) - Technical architecture
3. `src/advanced_omi_backend/main.py` - Core imports and setup
4. `config.yml` - Configuration overview
4. `config/config.yml` - Configuration overview

### **"I want to work on memory extraction"**
1. [memories.md](./memories.md) - Memory system details
2. `../config.yml` - Models and memory configuration
2. `../config/config.yml` - Models and memory configuration
3. `src/advanced_omi_backend/memory/memory_service.py` - Implementation
4. `src/advanced_omi_backend/controllers/memory_controller.py` - Processing triggers

@@ -130,7 +130,7 @@ backends/advanced-backend/
│ │ └── memory_service.py # Memory system (Mem0)
│ └── model_registry.py # Configuration loading
├── config.yml # 📋 Central configuration
├── config/config.yml # 📋 Central configuration
├── MEMORY_DEBUG_IMPLEMENTATION.md # Debug system details
```

@@ -148,7 +148,7 @@ backends/advanced-backend/

### **Configuration**
- **Loading**: `src/advanced_omi_backend/model_registry.py`
- **File**: `config.yml`
- **File**: `config/config.yml`
- **Usage**: `src/advanced_omi_backend/memory/memory_service.py`

### **Authentication**
@@ -162,7 +162,7 @@ backends/advanced-backend/

1. **Follow the references**: Each doc links to specific code files and line numbers
2. **Use the debug API**: `GET /api/debug/memory/stats` shows live system status
3. **Check configuration first**: Many behaviors are controlled by `config.yml`
3. **Check configuration first**: Many behaviors are controlled by `config/config.yml`
4. **Understand the memory pipeline**: Memories (end-of-conversation)
5. **Test with curl**: All API endpoints have curl examples in the docs

Expand All @@ -175,20 +175,20 @@ backends/advanced-backend/
1. **Set up the system**: Follow [quickstart.md](./quickstart.md) to get everything running
2. **Test the API**: Use the curl examples in the documentation to test endpoints
3. **Explore the debug system**: Check `GET /api/debug/memory/stats` to see live data
4. **Modify configuration**: Edit `config.yml` (memory section) to see how it affects extraction
4. **Modify configuration**: Edit `config/config.yml` (memory section) to see how it affects extraction
5. **Read the code**: Start with `src/advanced_omi_backend/main.py` and follow the references in each doc

### **Contributing Guidelines**

- **Add code references**: When updating docs, include file paths and line numbers
- **Test your changes**: Use the debug API to verify your modifications work
- **Update configuration**: Add new settings to `config.yml` when needed
- **Update configuration**: Add new settings to `config/config.yml` when needed
- **Follow the architecture**: Keep memories in their respective services

### **Getting Help**

- **Debug API**: `GET /api/debug/memory/*` endpoints show real-time system status
- **Configuration**: Check `config.yml` for behavior controls
- **Configuration**: Check `config/config.yml` for behavior controls
- **Logs**: Check Docker logs with `docker compose logs chronicle-backend`
- **Documentation**: Each doc file links to relevant code sections

4 changes: 2 additions & 2 deletions backends/advanced/Docs/contribution.md
@@ -1,12 +1,12 @@
1. Docs/quickstart.md (15 min)
2. Docs/architecture.md (20 min)
3. main.py - just the imports and WebSocket sections (15 min)
4. config.yml (memory section) (10 min)
4. config/config.yml (memory section) (10 min)

🔧 "I want to work on memory extraction"

1. Docs/quickstart.md → Docs/memories.md
2. config.yml (memory.extraction section)
2. config/config.yml (memory.extraction section)
3. main.py lines 1047-1065 (trigger)
4. main.py lines 1163-1195 (processing)
5. src/memory/memory_service.py
4 changes: 2 additions & 2 deletions backends/advanced/Docs/memories.md
@@ -10,7 +10,7 @@ This document explains how to configure and customize the memory service in the
- **Repository Layer**: `src/advanced_omi_backend/conversation_repository.py` (clean data access)
- **Processing Manager**: `src/advanced_omi_backend/processors.py` (MemoryProcessor class)
- **Conversation Management**: `src/advanced_omi_backend/conversation_manager.py` (lifecycle coordination)
- **Configuration**: `config.yml` (memory section) + `src/model_registry.py`
- **Configuration**: `config/config.yml` (memory section) + `src/model_registry.py`

## Overview

Expand Down Expand Up @@ -180,7 +180,7 @@ OPENAI_MODEL=gpt-5-mini # Recommended for reliable JSON output
# OPENAI_MODEL=gpt-3.5-turbo # Budget option
```

Or configure via `config.yml` (memory block):
Or configure via `config/config.yml` (memory block):

```yaml
memory_extraction:
6 changes: 3 additions & 3 deletions backends/advanced/Docs/memory-configuration-guide.md
@@ -6,10 +6,10 @@ This guide helps you set up and configure the memory system for the Friend Advan

1. **Copy the template configuration**:
```bash
Edit the `memory` section of `config.yml`.
Edit the `memory` section of `config/config.yml`.
```

2. **Edit `config.yml`** with your preferred settings in the `memory` section:
2. **Edit `config/config.yml`** with your preferred settings in the `memory` section:
```yaml
memory:
provider: "mem0" # or "basic" for simpler setup
@@ -127,6 +127,6 @@ memory:

## Next Steps

- Configure action items detection in `config.yml` (memory.extraction)
- Configure action items detection in `config/config.yml` (memory.extraction)
- Set up custom prompt templates for your use case
- Monitor memory processing in the debug dashboard
14 changes: 7 additions & 7 deletions backends/advanced/Docs/quickstart.md
@@ -177,9 +177,9 @@ After configuration, verify everything works with the integration test suite:

# Alternative: Manual test with detailed logging
source .env && export DEEPGRAM_API_KEY OPENAI_API_KEY && \
uv run pytest tests/test_integration.py -vv -s --log-cli-level=INFO
uv run robot --outputdir ../../test-results --loglevel INFO ../../tests/integration/integration_test.robot
```
This end-to-end test validates the complete audio processing pipeline.
This end-to-end test validates the complete audio processing pipeline using Robot Framework.

## Using the System

@@ -340,7 +340,7 @@ curl -X POST "http://localhost:8000/api/audio/upload" \

**Implementation**:
- **Memory System**: `src/advanced_omi_backend/memory/memory_service.py` + `src/advanced_omi_backend/controllers/memory_controller.py`
- **Configuration**: `config.yml` (memory + models) in repo root
- **Configuration**: `config/config.yml` (memory + models) in repo root

### Authentication & Security
- **Email Authentication**: Login with email and password
@@ -539,10 +539,10 @@ OPENMEMORY_MCP_URL=http://host.docker.internal:8765

> 🎯 **New to memory configuration?** Read our [Memory Configuration Guide](./memory-configuration-guide.md) for a step-by-step setup guide with examples.

The system uses **centralized configuration** via `config.yml` for all memory extraction and model settings.
The system uses **centralized configuration** via `config/config.yml` for all memory extraction and model settings.

### Configuration File Location
- **Path**: `config.yml` in repo root
- **Path**: `config/config.yml` in repo root
- **Hot-reload**: Changes are applied on next processing cycle (no restart required)
- **Fallback**: If file is missing, system uses safe defaults with environment variables

@@ -611,7 +611,7 @@ If you experience JSON parsing errors in fact extraction:

2. **Enable fact extraction** with reliable JSON output:
```yaml
# In config.yml (memory section)
# In config/config.yml (memory section)
fact_extraction:
enabled: true # Safe to enable with GPT-4o
```
@@ -725,5 +725,5 @@ curl -H "Authorization: Bearer $ADMIN_TOKEN" \
- **Connect audio clients** using the WebSocket API
- **Explore the dashboard** to manage conversations and users
- **Review the user data architecture** for understanding data organization
- **Customize memory extraction** by editing the `memory` section in `config.yml`
- **Customize memory extraction** by editing the `memory` section in `config/config.yml`
- **Monitor processing performance** using debug API endpoints
21 changes: 14 additions & 7 deletions backends/advanced/README.md
@@ -100,14 +100,21 @@ See [Docs/HTTPS_SETUP.md](Docs/HTTPS_SETUP.md) for detailed configuration.
To run integration tests with different transcription providers:

```bash
# Test with Parakeet ASR (offline transcription)
# Automatically starts test ASR service - no manual setup required
source .env && export DEEPGRAM_API_KEY && export OPENAI_API_KEY && TRANSCRIPTION_PROVIDER=parakeet uv run pytest tests/test_integration.py::test_full_pipeline_integration -v -s --tb=short
# Test with different configurations using config.yml files
# Test configs located in tests/configs/

# Test with Deepgram (default)
source .env && export DEEPGRAM_API_KEY && export OPENAI_API_KEY && uv run pytest tests/test_integration.py::test_full_pipeline_integration -v -s --tb=short
# Test with Parakeet ASR + Ollama (offline, no API keys)
CONFIG_FILE=../../tests/configs/parakeet-ollama.yml ./run-test.sh

# Test with Deepgram + OpenAI (cloud-based)
CONFIG_FILE=../../tests/configs/deepgram-openai.yml ./run-test.sh

# Manual Robot Framework test execution
source .env && export DEEPGRAM_API_KEY OPENAI_API_KEY && \
uv run robot --outputdir ../../test-results --loglevel INFO ../../tests/integration/integration_test.robot
```

**Prerequisites:**
- API keys configured in `.env` file
- For debugging: Set `CACHED_MODE = True` in test file to keep containers running
- API keys configured in `.env` file (for cloud providers)
- Test configurations in `tests/configs/` directory
- For debugging: Set `CLEANUP_CONTAINERS=false` environment variable to keep containers running
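The `CLEANUP_CONTAINERS=false` debugging flag above is an environment-variable boolean. A typical way such a flag is interpreted — this permissive parsing is an assumption for illustration, not verified against this repo's scripts — is:

```python
def cleanup_enabled(raw):
    """Interpret an env-style boolean; only explicit false-like values disable cleanup.

    Assumed semantics for illustration only, not the repository's actual implementation.
    """
    if raw is None:
        return True  # default: clean up test containers after the run
    return raw.strip().lower() not in ("false", "0", "no")

print(cleanup_enabled(None), cleanup_enabled("false"), cleanup_enabled("true"))  # -> True False True
```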
2 changes: 1 addition & 1 deletion backends/advanced/SETUP_SCRIPTS.md
@@ -6,7 +6,7 @@ This document explains the different setup scripts available in Friend-Lite and

| Script | Purpose | When to Use |
|--------|---------|-------------|
| `init.py` | **Main interactive setup wizard** | **Recommended for all users** - First time setup with guided configuration (located at repo root). Memory now configured in `config.yml`. |
| `init.py` | **Main interactive setup wizard** | **Recommended for all users** - First time setup with guided configuration (located at repo root). Memory now configured in `config/config.yml`. |
| `setup-https.sh` | HTTPS certificate generation | **Optional** - When you need secure connections for microphone access |

## Main Setup Script: `init.py`