A flexible, type-safe caching library for Go with support for in-memory and distributed caching.
- Type-safe interface built on Go generics
- Multiple backends: In-memory, distributed (Redis/Valkey), or no-op
- Multiple serialization formats: Protobuf, JSON, and Go binary (gob)
- OpenTelemetry instrumentation for observability
- Health checks for distributed backends
- Simple API with context support
- Optimized for Cloud Run and other GCP services
This library builds on proven, battle-tested components:
- In-memory cache: `github.com/jellydator/ttlcache/v2`, a high-performance TTL cache
- Distributed cache: `github.com/redis/go-redis/v9`, a Redis/Valkey client with connection pooling (supports both Redis and Valkey)
- Protobuf support: `google.golang.org/protobuf`, the official protobuf library
- Observability: `github.com/redis/go-redis/extra/redisotel/v9`, OpenTelemetry instrumentation for go-redis
Install:

```bash
go get github.com/dentech-floss/cache
```

Quick start:

```go
import "github.com/dentech-floss/cache"

// Create a memory cache
config := &cache.Config{
Type: cache.TypeMemory,
Memory: &cache.MemoryConfig{
SkipTTLExtensionOnHit: true,
},
}
c, err := cache.New[*User](config)
if err != nil {
panic(err)
}
defer c.Close()
// Use the cache
c.Set(ctx, "key", &User{ID: "123"}, 5*time.Minute)
user, found := c.Get(ctx, "key")
```

Or create an in-memory cache directly:

```go
import "github.com/dentech-floss/cache"

// Create an in-memory cache for any type
c := cache.NewMemory[*User](nil)
defer c.Close()
// Use the cache
c.Set(ctx, "key", &User{ID: "123"}, 5*time.Minute)
user, found := c.Get(ctx, "key")
```

Distributed cache for protobuf messages:

```go
import "github.com/dentech-floss/cache"

// Create a distributed cache for protobuf messages
config := &cache.DistributedConfig{
Addr: "localhost:6379",
}
c, err := cache.NewDistributed[*pb.User](config)
if err != nil {
panic(err)
}
defer c.Close()
// Use the cache
c.Set(ctx, "key", &pb.User{Id: "123"}, 5*time.Minute)
user, found := c.Get(ctx, "key")
```

Distributed cache with JSON serialization:

```go
import "github.com/dentech-floss/cache"

// Create a distributed cache for any type with JSON serialization
config := &cache.DistributedConfig{
Addr: "localhost:6379",
SerializationType: cache.SerializationJSON,
}
c, err := cache.NewDistributedGeneric[*User](config)
if err != nil {
panic(err)
}
defer c.Close()
// Use the cache
c.Set(ctx, "key", &User{ID: "123"}, 5*time.Minute)
user, found := c.Get(ctx, "key")
```

Distributed cache with gob serialization:

```go
import "github.com/dentech-floss/cache"

// Create a distributed cache for any type with Gob serialization (faster than JSON)
config := &cache.DistributedConfig{
Addr: "localhost:6379",
SerializationType: cache.SerializationGob,
}
c, err := cache.NewDistributedGeneric[*User](config)
if err != nil {
panic(err)
}
defer c.Close()
// Use the cache
c.Set(ctx, "key", &User{ID: "123"}, 5*time.Minute)
user, found := c.Get(ctx, "key")
```

No-op cache:

```go
import "github.com/dentech-floss/cache"

// Useful for testing or when caching is disabled
c := cache.NewNoOp[*User]()
```

All cache implementations satisfy the `Cache[T]` interface:

```go
type Cache[T any] interface {
Get(ctx context.Context, key string) (T, bool)
Set(ctx context.Context, key string, value T, ttl time.Duration) error
Delete(ctx context.Context, key string) error
Close() error
}
```
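Because every backend satisfies the same interface, application code can depend on `Cache[T]` and swap backends without changes. A minimal cache-aside sketch; the `UserService` type and `fetchUserFromDB` function are illustrative, not part of the library:

```go
import (
	"context"
	"log"
	"time"

	"github.com/dentech-floss/cache"
)

type UserService struct {
	users cache.Cache[*User] // any backend: memory, distributed, or no-op
}

func (s *UserService) GetUser(ctx context.Context, id string) (*User, error) {
	// Serve from cache when possible.
	if u, found := s.users.Get(ctx, id); found {
		return u, nil
	}

	// Cache miss: fall back to the source of truth.
	u, err := fetchUserFromDB(ctx, id) // hypothetical data source
	if err != nil {
		return nil, err
	}

	// Best-effort write-back; a failed Set should not fail the request.
	if err := s.users.Set(ctx, id, u, 5*time.Minute); err != nil {
		log.Printf("cache set failed: %v", err)
	}
	return u, nil
}
```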
Memory cache configuration:

```go
config := &cache.MemoryConfig{
	SkipTTLExtensionOnHit: true, // Don't extend TTL on cache hits
}
```

Distributed cache configuration:

```go
config := &cache.DistributedConfig{
Addr: "localhost:6379", // Works with both Redis and Valkey
Password: "optional-password",
DB: 0,
PoolSize: 10,
MinIdleConns: 5,
MaxRetries: 3,
DialTimeout: 5 * time.Second,
ReadTimeout: 3 * time.Second,
WriteTimeout: 3 * time.Second,
EnableTracing: true,
EnableMetrics: true,
SerializationType: cache.SerializationJSON, // or SerializationGob
Client: nil, // Optional: reuse an existing redis.UniversalClient
}
```

Note: The distributed cache works with both Redis and Valkey servers; simply point `Addr` at your Redis or Valkey instance.
You can supply an existing `redis.UniversalClient` (for example, a `*redis.Client` or `*redis.ClusterClient`) so that multiple caches reuse the same connection pool:

```go
import (
"github.com/dentech-floss/cache"
"github.com/redis/go-redis/v9"
)
sharedClient := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
})
userCache, err := cache.NewDistributedGeneric[*User](&cache.DistributedConfig{
Client: sharedClient,
SerializationType: cache.SerializationJSON,
})
// handle error
orderCache, err := cache.NewDistributedGeneric[*Order](&cache.DistributedConfig{
Client: sharedClient,
SerializationType: cache.SerializationGob,
})
// handle error
```

When a shared client is provided, the cache skips instrumenting and closing the client, allowing your application to manage its lifecycle centrally.
Note: `EnableTracing` and `EnableMetrics` are ignored when `Client` is supplied, because the cache cannot safely instrument a shared client. Instrument the client before passing it to the cache if you need telemetry.
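For example, instrumentation can be applied to the shared client up front. A minimal sketch using the `redisotel` package this library already builds on (error handling abbreviated):

```go
import (
	"github.com/redis/go-redis/extra/redisotel/v9"
	"github.com/redis/go-redis/v9"
)

sharedClient := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

// Instrument tracing and metrics on the client itself; every cache that
// reuses this client then inherits the telemetry.
if err := redisotel.InstrumentTracing(sharedClient); err != nil {
	// handle error
}
if err := redisotel.InstrumentMetrics(sharedClient); err != nil {
	// handle error
}
```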
Serialization formats:

- Protobuf: For protobuf messages (automatic detection)
  - Best for: Microservices communication, when you already use protobuf
  - Pros: Compact, fast, schema evolution support
  - Cons: Requires protobuf definitions
- JSON: For any JSON-serializable type
  - Best for: General purpose, debugging, interoperability
  - Pros: Human-readable, language-agnostic, easy to debug
  - Cons: Larger payloads, slower than binary formats (measure with the sketch below)
- Gob: For any Go type (faster than JSON, but Go-specific)
  - Best for: Go-only environments, performance-critical applications
  - Pros: Fastest, smallest size, handles complex Go types
  - Cons: Go-specific, not human-readable
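Payload sizes depend heavily on your types (gob carries type information up front, so small values can behave differently from large ones). A standalone sketch, independent of the cache API, that prints JSON and gob payload sizes so you can compare with your own data:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"encoding/json"
	"fmt"
	"log"
)

// User is an illustrative type; substitute the types you actually cache.
type User struct {
	ID    string
	Name  string
	Email string
}

func main() {
	u := &User{ID: "123", Name: "Ada", Email: "ada@example.com"}

	jsonBytes, err := json.Marshal(u)
	if err != nil {
		log.Fatal(err)
	}

	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(u); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("json: %d bytes, gob: %d bytes\n", len(jsonBytes), buf.Len())
}
```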
Choosing a backend (a wiring sketch follows this list):

Memory cache:

- Use when: Single instance, development, testing
- Pros: Fastest, no network overhead, simple setup
- Cons: Not shared between instances, lost on restart

Distributed cache:

- Use when: Multiple instances, production, shared state
- Pros: Shared between instances, persistent, scalable
- Cons: Network overhead, requires Redis/Valkey setup

NoOp cache:

- Use when: Testing, debugging, disabling cache
- Pros: No overhead, predictable behavior
- Cons: No caching benefits
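A sketch of wiring this choice up at startup, using the constructors shown above; the environment switch and address are illustrative:

```go
import "github.com/dentech-floss/cache"

func newUserCache(env string) (cache.Cache[*User], error) {
	switch env {
	case "production":
		// Shared, persistent cache for multi-instance deployments.
		c, err := cache.NewDistributedGeneric[*User](&cache.DistributedConfig{
			Addr:              "valkey.internal:6379", // hypothetical address
			SerializationType: cache.SerializationGob,
		})
		if err != nil {
			return nil, err
		}
		return c, nil
	case "test":
		// Predictable no-op behavior in tests.
		return cache.NewNoOp[*User](), nil
	default:
		// Fast local cache for single-instance development.
		return cache.NewMemory[*User](nil), nil
	}
}
```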
Distributed caches implement the `HealthChecker` interface:

```go
// c is a distributed cache created with NewDistributed or NewDistributedGeneric
if healthChecker, ok := c.(cache.HealthChecker); ok {
    err := healthChecker.Ping(ctx)
if err != nil {
// Handle unhealthy cache
}
}
```
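For example, `Ping` can back a readiness endpoint so an orchestrator stops routing traffic while the cache backend is unreachable. A sketch; the handler name and route shape are illustrative, not part of the library:

```go
import (
	"net/http"

	"github.com/dentech-floss/cache"
)

// readyzHandler is a hypothetical readiness handler; c can be any cache
// created with NewDistributed or NewDistributedGeneric.
func readyzHandler[T any](c cache.Cache[T]) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if hc, ok := c.(cache.HealthChecker); ok {
			if err := hc.Ping(r.Context()); err != nil {
				http.Error(w, "cache unavailable", http.StatusServiceUnavailable)
				return
			}
		}
		w.WriteHeader(http.StatusOK)
	}
}
```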
Performance characteristics:

- Memory cache: ~1-10μs per operation
- Distributed cache: ~100-1000μs per operation (network dependent)
- Serialization overhead: Gob < Protobuf < JSON
- TTL precision: Memory cache has second precision, distributed cache has millisecond precision
The cache library follows Go's error handling conventions:

```go
// Set operations can fail
err := cache.Set(ctx, "key", value, ttl)
if err != nil {
log.Printf("Cache set failed: %v", err)
// Continue without caching
}
// Get operations return false on cache miss or error
value, found := cache.Get(ctx, "key")
if !found {
// Cache miss - fetch from source
}
// Delete operations can fail
err := cache.Delete(ctx, "key")
if err != nil {
log.Printf("Cache delete failed: %v", err)
}
```

Best practices:

- Always handle errors from Set/Delete operations
- Use context cancellation for timeout control (see the sketch after this list)
- Choose appropriate TTL based on your data freshness requirements
- Use NoOp cache in tests for predictable behavior
- Monitor cache hit rates and adjust TTL accordingly
- Use health checks in production for distributed caches
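For the context-cancellation point above, a sketch of bounding a cache call so a slow or unreachable backend cannot stall the request path; the 50ms budget is illustrative, and the standard `context` and `time` packages are assumed:

```go
// Bound the cache lookup; on timeout, Get reports a miss and the request
// falls through to the source of truth instead of blocking.
cacheCtx, cancel := context.WithTimeout(ctx, 50*time.Millisecond)
defer cancel()

user, found := c.Get(cacheCtx, "key")
if !found {
	// Treat a timeout like a cache miss and fetch from the source.
}
```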
Apache 2.0 License