106 changes: 106 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,106 @@
name: CI

on:
  push:
  pull_request:

jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install uv
          # uv refuses to install without an active virtualenv unless --system is passed
          uv pip install --system ".[tests]"
      - name: Run unit tests
        run: pytest -q

  s3-integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # GitHub Actions service containers cannot be given a custom command,
      # and minio/minio needs one (`server /data`), so start MinIO directly.
      - name: Start MinIO
        run: |
          docker run -d --name minio \
            -p 9000:9000 \
            -e MINIO_ROOT_USER=minioadmin \
            -e MINIO_ROOT_PASSWORD=minioadmin \
            minio/minio:RELEASE.2025-09-07T16-13-09Z-cpuv1 \
            server /data --console-address ":9001"
          # Fail the job if MinIO is not healthy within 60s
          timeout 60 bash -c 'until curl -sf http://localhost:9000/minio/health/ready; do sleep 2; done'
      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install uv
          # Extras are comma-separated; --system because no virtualenv is active
          uv pip install --system ".[tests,tests-s3]"
      - name: Configure MinIO buckets
        env:
          AWS_ACCESS_KEY_ID: minioadmin
          AWS_SECRET_ACCESS_KEY: minioadmin
          AWS_REGION: us-east-1
          S3_ENDPOINT_URL: http://localhost:9000
        run: |
          python - <<'PY'
          import os

          import boto3

          s3 = boto3.client(
              "s3",
              endpoint_url=os.environ["S3_ENDPOINT_URL"],
              aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
              aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
              region_name=os.environ["AWS_REGION"],
          )
          for name in ["test-bkt", "test-bkt2", "test-bkt-swr", "test-bkt-chain"]:
              try:
                  s3.create_bucket(Bucket=name)
              except Exception:
                  # MinIO errors if the bucket already exists; safe to ignore here.
                  pass
          PY
      - name: Run S3 integration tests
        env:
          AWS_ACCESS_KEY_ID: minioadmin
          AWS_SECRET_ACCESS_KEY: minioadmin
          AWS_REGION: us-east-1
          S3_ENDPOINT_URL: http://localhost:9000
        run: pytest -q tests/test_s3_cache_integration.py

  gcs-integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Service containers cannot take a custom command, and the fake-gcs-server
      # image may not ship curl for an in-container health check, so run it
      # directly and poll from the runner.
      - name: Start fake-gcs-server
        run: |
          docker run -d --name fake-gcs \
            -p 4443:4443 \
            fsouza/fake-gcs-server:latest \
            -scheme http -public-host localhost:4443
          timeout 60 bash -c 'until curl -sf http://localhost:4443/storage/v1/b; do sleep 2; done'
      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install uv
          uv pip install --system ".[tests,tests-gcs]"
      - name: Run GCS integration tests
        env:
          STORAGE_EMULATOR_HOST: http://localhost:4443
        run: pytest -q -m integration tests/test_gcs_cache_integration.py
17 changes: 17 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,21 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.2.2-beta] - 2025-12-25

### Added
- **LocalFileCache**: filesystem-backed storage with TTL, atomic writes, optional compression, and dedupe to skip identical rewrites.
- **ChainCache**: composable multi-level cache (e.g., InMem -> Redis -> S3/GCS/local file) with read-through promotion and write-through semantics (see the sketch after this list).
- **Dedupe writes**: opt-in for RedisCache, S3Cache, GCSCache, and LocalFileCache to avoid rewriting unchanged payloads.
- **Docs**: the production-grade BGCache writer/reader guide (`docs/bgcache.md`) now covers Single-Writer/Multi-Reader setups with ChainCache cold tiers (S3/GCS/LocalFileCache) and per-process readers.
- **README**: updates covering ChainCache, `dedupe_writes`, and LocalFileCache.
- **Tests**: integration coverage for LocalFileCache (TTL expiry, dedupe, decorator usage, ChainCache integration).
- **Refactor**: storage backends split into `advanced_caching.storage` package (per-backend modules) while preserving public exports.
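
The new pieces are designed to compose. A minimal sketch of a three-tier setup, assuming top-level exports stayed intact after the refactor; the class name `InMemCache`, the constructor arguments, and the `get` call are illustrative assumptions, not the documented API:

```python
# Illustrative sketch only: class names follow the entries above, but the
# constructor arguments and method names are assumptions.
from advanced_caching import ChainCache, InMemCache, LocalFileCache, RedisCache

# Cold tier on disk: TTL, atomic writes, optional compression, opt-in dedupe.
cold = LocalFileCache(path="/var/cache/myapp", compress=True, dedupe_writes=True)

# Tiers are ordered hot -> cold: reads that hit a lower tier are promoted into
# the tiers above it, and writes go through every tier.
cache = ChainCache([
    InMemCache(),
    RedisCache(url="redis://localhost:6379", dedupe_writes=True),
    cold,
])

value = cache.get("report:2025-12-25")  # read-through with promotion
```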

### Fixed
- Redis dedupe now extends the TTL when skipping identical writes (sketched after this list).
- SharedAsyncScheduler uses current event loop when available (stability for async BGCache).
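
A sketch of the fixed dedupe behavior, written against plain redis-py rather than the library's internals (the two-key layout is an illustrative assumption):

```python
import hashlib

import redis

r = redis.Redis()

def dedupe_set(key: str, payload: bytes, ttl: int) -> None:
    # Keep a digest next to the value so identical rewrites can be detected.
    digest = hashlib.sha256(payload).hexdigest().encode()
    if r.get(key + ":sha") == digest:
        # Identical payload: skip the write but refresh both TTLs (the fix);
        # previously the deduped entry could expire while still in active use.
        r.expire(key, ttl)
        r.expire(key + ":sha", ttl)
    else:
        r.set(key, payload, ex=ttl)
        r.set(key + ":sha", digest, ex=ttl)
```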

## [0.2.1] - 2025-12-25

### Fixed
@@ -13,6 +28,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

### Added
- `configure()` class method on all decorators to easily create pre-configured cache instances (e.g., `MyCache = TTLCache.configure(cache=RedisCache(...))`).
- **Object Storage Backends**: added `S3Cache` (AWS) and `GCSCache` (Google Cloud) for cost-effective storage of large objects; see the sketch after this list.
- Features: metadata-based TTL checks (saves download costs), gzip compression, and pluggable serializers.
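
Combining the two entries above, a pre-configured decorator backed by object storage might look like this (the `S3Cache` arguments and the decorator call signature are assumptions):

```python
from advanced_caching import S3Cache, TTLCache  # import path assumed

# Pre-configured instance via configure(), per the entry above.
ReportCache = TTLCache.configure(cache=S3Cache(bucket="my-cache-bucket", compress=True))

@ReportCache(ttl=3600)  # decorator call signature assumed
def build_report(day: str) -> bytes:
    ...
```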

## [0.2.0] - 2025-12-23
