Releases: CaviraOSS/OpenMemory
Beta v1.3.0
Changelog
api simplification
- python sdk: simplified to zero-config `Memory()` api matching javascript
  - `from openmemory.client import Memory` → `mem = Memory()`
  - works out of the box with sensible defaults (in-memory sqlite, fast tier, synthetic embeddings)
  - optional configuration via environment variables or constructor
- breaking change: moved from `OpenMemory` class to `Memory` class
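The zero-config pattern above can be sketched as follows. This is an illustrative mock, not the actual SDK: the real class is imported as `from openmemory.client import Memory`, while `MockMemory` here just demonstrates the "constructor args, then environment, then defaults" fallback with an in-memory SQLite store.

```python
import os
import sqlite3

class MockMemory:
    """Illustrative stand-in for the SDK's Memory class (not the real thing)."""

    def __init__(self, db_path=None, tier=None):
        # Constructor arguments win, then environment variables, then defaults -
        # mirroring the "zero-config with optional overrides" behaviour above.
        self.db_path = db_path or os.environ.get("OM_DB_PATH", ":memory:")
        self.tier = tier or os.environ.get("OM_TIER", "fast")
        self.db = sqlite3.connect(self.db_path)
        self.db.execute(
            "create table if not exists memories (id integer primary key, body text)"
        )

    def add(self, text):
        cur = self.db.execute("insert into memories (body) values (?)", (text,))
        self.db.commit()
        return cur.lastrowid

mem = MockMemory()       # works out of the box, no configuration required
mid = mem.add("hello")   # first inserted row gets id 1
```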
benchmark suite rewrite
- implemented comprehensive benchmark suite in `temp/benchmarks/`
  - typescript-based, using `tsx` for execution
  - supports longmemeval dataset evaluation
  - multi-backend comparison (openmemory, mem0, zep, supermemory)
- created `src/main.ts` consolidated benchmark runner
  - environment validation
  - backend instantiation checks
  - sequential benchmark execution with detailed logging
✨ features
core improvements
- `Memory.wipe()`: added database wipe functionality for testing
  - `clear_all` implementation in `db.ts` for postgres and sqlite
  - clears memories, vectors, waypoints, and users tables
  - useful for benchmark isolation and test cleanup
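For the SQLite case, the wipe described above can be sketched like this. The four table names come from the bullet list; the schema below is an illustrative guess, not OpenMemory's actual one.

```python
import sqlite3

def clear_all(db):
    """Delete all rows from the four tables the changelog says wipe() clears."""
    for table in ("memories", "vectors", "waypoints", "users"):
        db.execute(f"delete from {table}")
    db.commit()

# Set up a throwaway database with a guessed minimal schema.
db = sqlite3.connect(":memory:")
for t in ("memories", "vectors", "waypoints", "users"):
    db.execute(f"create table {t} (id integer primary key, data text)")
db.execute("insert into memories (data) values ('x')")

clear_all(db)
count = db.execute("select count(*) from memories").fetchone()[0]  # 0 after wipe
```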
- environment variable overrides:
  - `OM_OLLAMA_MODEL`: override ollama embedding model
  - `OM_OPENAI_MODEL`: override openai embedding model
  - `OM_VEC_DIM`: configure vector dimension (critical for embedding compatibility)
  - `OM_DB_PATH`: sqlite database path (supports `:memory:`)
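A typical way such overrides are resolved is sketched below. The variable names come from the list above; the fallback values are illustrative assumptions, not OpenMemory's real defaults.

```python
import os

def load_embedding_config(env=os.environ):
    """Resolve embedding settings from OM_* environment variables.

    Variable names match the changelog above; the defaults here are
    placeholders, not the project's actual defaults.
    """
    return {
        "ollama_model": env.get("OM_OLLAMA_MODEL", "embeddinggemma"),
        "openai_model": env.get("OM_OPENAI_MODEL", "text-embedding-3-small"),
        "vec_dim": int(env.get("OM_VEC_DIM", "768")),
        "db_path": env.get("OM_DB_PATH", ":memory:"),
    }

# Passing a dict instead of os.environ makes the resolution easy to test.
cfg = load_embedding_config({"OM_VEC_DIM": "1536"})
```

Parsing `OM_VEC_DIM` to an `int` up front is what catches the 768-vs-1536 dimension mismatches mentioned in the bug fixes below, rather than letting a wrong-sized vector reach the store.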
vector store enhancements
- added comprehensive logging to `PostgresVectorStore`
  - logs vector storage operations with id, sector, dimension
  - logs search operations with sector and result count
  - aids in debugging retrieval issues
🐛 bug fixes
- embedding configuration:
  - fixed `models.ts` to respect `OM_OLLAMA_MODEL` environment variable
  - resolved dimension mismatch issues (768 vs 1536) for embeddinggemma
  - ensured `OM_TIER=deep` uses semantic embeddings (not synthetic fallback)
- benchmark data isolation:
  - implemented proper database reset between benchmark runs
  - fixed simhash collision issues causing cross-user contamination
  - added `resetUser()` functionality calling `Memory.wipe()`
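For context on the simhash collisions mentioned above: simhash gives near-duplicate texts near-identical fingerprints, so without per-user isolation, similar memories from different users can collide. A minimal 64-bit simhash (illustrative only, not the project's implementation):

```python
import hashlib

def simhash(text, bits=64):
    """Minimal simhash: every token votes on each bit of the fingerprint."""
    votes = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(
            hashlib.blake2b(token.encode(), digest_size=8).digest(), "big"
        )
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, v in enumerate(votes) if v > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Similar texts land close together in Hamming distance - useful for
# dedup, but also the source of cross-user collisions when data from
# several users shares one index, which the per-run resets above avoid.
d_same = hamming(simhash("the cat sat on the mat"),
                 simhash("the cat sat on a mat"))
d_diff = hamming(simhash("the cat sat on the mat"),
                 simhash("stochastic gradient descent converges"))
```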
- configuration loading:
  - fixed dotenv timing issues in benchmark suite
  - ensured environment variables load before openmemory-js initialization
  - corrected dataset path resolution (`longmemeval_s.json`)
📚 documentation
- comprehensive readme updates:
  - root `README.md`: language-agnostic, showcases both python & javascript sdks
  - `packages/openmemory-js/README.md`: complete api reference, mcp integration, examples
  - `packages/openmemory-py/README.md`: zero-config usage, all embedding providers
- api documentation:
  - environment variables with descriptions
  - cognitive sectors explanation
  - performance tiers breakdown
  - embedding provider configurations
🔧 internal improvements
- type safety: added lint error handling in benchmark adapters
- code organization: separated generator, judge, and backend interfaces
- debug tooling: created dimension check script (`check_dim.ts`)
- logging standardization: consistent `[Component]` prefix pattern
⚠️ breaking changes
- python sdk now uses `from openmemory.client import Memory` instead of `from openmemory import OpenMemory`
- `Memory()` constructor signature changed to accept optional parameters (was required)
- benchmark suite moved to typescript (was python)
Full Changelog: v1.2.3...v1.3.0
v1.2.3
1.2.3 - 2025-12-14
Added
- Temporal Filtering: Enables precise time-based memory retrieval
  - Added `startTime` and `endTime` filters to the `query` method across Backend, JS SDK, and Python SDK.
  - Allows filtering memories by creation time range.
  - Fully integrated into `hsg_query` logic.
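The start/end filter semantics can be illustrated with a plain filter over creation timestamps. This is a sketch of the behaviour, not the SDK's `hsg_query` implementation; the sample records are invented.

```python
from datetime import datetime, timezone

memories = [
    {"text": "joined the beta",  "created_at": datetime(2025, 11, 1, tzinfo=timezone.utc)},
    {"text": "filed a bug",      "created_at": datetime(2025, 12, 1, tzinfo=timezone.utc)},
    {"text": "shipped v1.2.3",   "created_at": datetime(2025, 12, 14, tzinfo=timezone.utc)},
]

def query(items, start_time=None, end_time=None):
    """Keep only memories created inside the inclusive [start_time, end_time] range.

    Either bound may be omitted, matching the optional filters described above.
    """
    return [
        m for m in items
        if (start_time is None or m["created_at"] >= start_time)
        and (end_time is None or m["created_at"] <= end_time)
    ]

december = query(
    memories,
    start_time=datetime(2025, 12, 1, tzinfo=timezone.utc),
    end_time=datetime(2025, 12, 31, tzinfo=timezone.utc),
)
# december holds the two entries created in December 2025
```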
Fixed
- JavaScript SDK Types: Fixed `IngestURLResult` import error and `v.v` property access bug in `VectorStore` integration.
- Python SDK Filtering: Fixed missing implementation of `user_id` and temporal filters in the `hsg_query` loop.
Full Changelog: v1.2.2...v1.2.3
v1.2.2
Fixed
- MCP Server Path Resolution: Fixed ENOENT error in stdio mode (Claude Desktop)
  - Enforced absolute path resolution for SQLite database
  - Ensures correct data directory creation regardless of working directory
  - Critical fix for local desktop client integration
- VectorStore Refactor: Fixed build regressions in backend
  - Migrated deprecated `qvector` operations to the `VectorStore` interface
  - Fixed `users.ts`, `memory.ts`, `graph.ts`, `mcp.ts`, and `decay.ts`
  - Removed partial SQL updates in favor of unified vector store methods
Added
- Valkey VectorStore Enhancements: Improved compatibility and performance
  - Refined vector storage implementation for Valkey backend
  - Optimized vector retrieval and storage operations
Changed
- IDE Extension:
  - Updates to Dashboard UI (`DashboardPanel.ts`) and extension activation logic (`extension.ts`)
  - Configuration and dependency updates
- Python SDK:
  - Refinements to embedding logic (`embed.py`)
  - Project configuration updates in `pyproject.toml`
- Backend Maintenance:
  - Dockerfile updates for improved containerization
  - Updates to CLI tool (`bin/opm.js`)
New Contributors
- @ajitam made their first contribution in #69
- @fparrav made their first contribution in #80
- @DAESA24 made their first contribution in #83
- @oantoshchenko made their first contribution in #84
- @therexone made their first contribution in #85
Full Changelog: 1.2.1...v1.2.2
v1.2.1
1.2.1 - Standalone SDK
Added
- Python SDK (`sdk-py/`): SDK overhaul; it can now run as a standalone version of OpenMemory
  - Full feature parity with Backend
  - Local-first architecture with SQLite backend
  - Multi-sector memory (episodic, semantic, procedural, emotional, reflective)
  - All embedding providers: synthetic, OpenAI, Gemini, Ollama, AWS
  - Advanced features: decay, compression, reflection
  - Comprehensive test suite (`sdk-py/tests/test_sdk_py.py`)
- JavaScript SDK Enhancements (`sdk-js/`): SDK overhaul; it can now run as a standalone version of OpenMemory
  - Full feature parity with Backend
  - Local-first architecture with SQLite backend
  - Multi-sector memory (episodic, semantic, procedural, emotional, reflective)
  - All embedding providers: synthetic, OpenAI, Gemini, Ollama, AWS
  - Advanced features: decay, compression, reflection
- Examples: Complete rewrite of both JS and Python examples
  - `examples/js-sdk/basic-usage.js` - CRUD operations
  - `examples/js-sdk/advanced-features.js` - Decay, compression, reflection
  - `examples/js-sdk/brain-sectors.js` - Multi-sector demonstration
  - `examples/py-sdk/basic_usage.py` - Python CRUD operations
  - `examples/py-sdk/advanced_features.py` - Advanced configuration
  - `examples/py-sdk/brain_sectors.py` - Sector demonstration
  - `examples/py-sdk/performance_benchmark.py` - Performance testing
- Tests: Comprehensive test suites for both SDKs
  - `tests/js-sdk/js-sdk.test.js` - Full SDK validation
  - `tests/py-sdk/test-sdk.py` - Python SDK validation
  - Tests cover: initialisation, CRUD, sectors, advanced features
- Architecture Documentation:
  - Mermaid diagram in main README showing complete data flow
  - Covers all 5 cognitive sectors
  - Shows embedding engine, storage layer, and recall engine
  - Includes temporal knowledge graph integration
  - Node.js script to regenerate diagrams
1.2.0
What's New
- Web UI to control OpenMemory
- HYBRID Tier Performance Mode
- Memory Compression Engine
What's Changed
- Decay system
- perf(vector): Optimized the aggregateVectors function by @DKB0512 in #26
- perf(embedding): Optimize embedWithLocal by @DKB0512 in #29
- perf(chunk): Optimized the combineChunk function by @DKB0512 in #27
- Add permissions for content read access by @recabasic in #31
New Contributors
- @recabasic made their first contribution in #31
Full Changelog: 1.1.1...1.2.0
1.1.1
Changelog
Added
- Memory Compression Engine: Auto-compresses chat/memory content to reduce tokens and latency
  - 5 compression algorithms: whitespace, filler, semantic, aggressive, balanced
  - Auto-selects optimal algorithm based on content analysis
  - Batch compression support for multiple texts
  - Live savings metrics (tokens saved, latency reduction, compression ratio)
  - Real-time statistics tracking across all compressions
  - Integrated into memory storage with automatic compression
  - REST API endpoints: `/api/compression/compress`, `/api/compression/batch`, `/api/compression/analyze`, `/api/compression/stats`
  - Example usage in `examples/backend/compression-examples.mjs`
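The simplest of the five algorithms, whitespace compression, and the savings metrics can be sketched as below. This is an illustration of the idea, not the engine's code; characters stand in for tokens and the ratio formula is an assumption.

```python
import re

def compress_whitespace(text):
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return re.sub(r"\s+", " ", text).strip()

def savings(original, compressed):
    """Rough savings metrics in the spirit of the live metrics above.

    Character counts stand in for token counts in this sketch.
    """
    ratio = len(compressed) / len(original) if original else 1.0
    return {
        "chars_saved": len(original) - len(compressed),
        "compression_ratio": round(ratio, 3),
    }

src = "hello    world\n\n\n  this   is  padded   "
out = compress_whitespace(src)   # "hello world this is padded"
stats = savings(src, out)
```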
- VS Code Extension with AI Auto-Link
  - Auto-links OpenMemory to 6 AI tools: Cursor, Claude, Windsurf, GitHub Copilot, Codex
  - Dual mode support: Direct HTTP or MCP (Model Context Protocol)
  - Status bar UI with clickable menu for easy control
  - Toggle between HTTP/MCP mode in real-time
  - Zero-config setup - automatically detects backend and writes configs
  - Performance optimizations:
    - ESH (Event Signature Hash): Deduplicates ~70% redundant saves
    - HCR (Hybrid Context Recall): Sub-80ms queries with sector filtering
    - MVC (Micro-Vector Cache): 32-entry LRU cache saves ~60% embedding calls
  - Settings for backend URL, API key, MCP mode toggle
  - Postinstall script for automatic setup
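The MVC optimization above (a 32-entry LRU cache in front of embedding calls) can be sketched with an `OrderedDict`. The `embed_fn` here is a placeholder, not a real embedding provider; the point is that repeated texts skip the expensive call.

```python
from collections import OrderedDict

class MicroVectorCache:
    """LRU cache in the spirit of the MVC optimization described above."""

    def __init__(self, embed_fn, capacity=32):
        self.embed_fn = embed_fn      # stand-in for a real embedding call
        self.capacity = capacity
        self.cache = OrderedDict()    # insertion order tracks recency
        self.calls = 0                # how many real embedding calls happened

    def get(self, text):
        if text in self.cache:
            self.cache.move_to_end(text)     # mark as most recently used
            return self.cache[text]
        self.calls += 1
        vec = self.embed_fn(text)
        self.cache[text] = vec
        if len(self.cache) > self.capacity:  # evict least recently used
            self.cache.popitem(last=False)
        return vec

fake_embed = lambda s: [float(len(s))]       # placeholder "embedding"
mvc = MicroVectorCache(fake_embed)
mvc.get("hello"); mvc.get("hello"); mvc.get("world")
# mvc.calls == 2: the second "hello" was served from the cache
```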
- API Authentication & Security
  - API key authentication with timing-safe comparison
  - Rate limiting middleware (configurable, default 100 req/min)
  - Compact 75-line auth implementation
  - Environment-based configuration
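The two mechanisms above can be sketched together: a timing-safe key check (so comparison time does not leak where the keys differ) and a fixed-window rate limiter with the mentioned configurable limit. This is an illustrative sketch of the pattern, not the actual middleware; the key value and window shape are assumptions.

```python
import hmac
import time

API_KEY = "secret-key"  # placeholder; real deployments read this from the environment

def check_key(candidate):
    """Timing-safe comparison: runtime is independent of where the keys differ."""
    return hmac.compare_digest(candidate.encode(), API_KEY.encode())

class RateLimiter:
    """Fixed-window limiter; default mirrors the 100 req/min mentioned above."""

    def __init__(self, limit=100, window=60.0):
        self.limit, self.window = limit, window
        self.start, self.count = time.monotonic(), 0

    def allow(self):
        now = time.monotonic()
        if now - self.start >= self.window:   # new window: reset the counter
            self.start, self.count = now, 0
        self.count += 1
        return self.count <= self.limit

rl = RateLimiter(limit=2)
ok = [rl.allow(), rl.allow(), rl.allow()]    # [True, True, False]
```

`hmac.compare_digest` is the standard-library primitive for this; a plain `==` on keys would short-circuit at the first mismatching byte and leak timing information.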
- CI/CD
  - GitHub Action for automated Docker build testing
  - Ensures Docker images build successfully on every push
Changed
- Optimized all compression code for maximum efficiency
1.1.0
- Add pluggable vector DBs and PostgreSQL support
Full Changelog: 1.0.0...1.1.0
MCP
What's Changed
- Add Model Context Protocol (MCP)
- Add Tag and Metadata Filtering to HSG + API Query by @ammesonb in #1
- refactor: Dockerfile to install all dependencies and prune dev by @josephgoksu in #4
- Fix Docker build by @pc-quiknode in #5
New Contributors
- @ammesonb made their first contribution in #1
- @josephgoksu made their first contribution in #4
- @pc-quiknode made their first contribution in #5
Full Changelog: https://github.com/CaviraOSS/OpenMemory/commits/1.0.0
