23 commits
b8c7f3e
Refactor control flow of Overseer.with/2
whitfin Oct 25, 2025
36e8141
Combine Overseer.get/1 and Overseer.retrieve/1
whitfin Oct 25, 2025
48f92e6
Raise ArgumentError when caches do not exist
whitfin Oct 27, 2025
d27c66b
Remove tagging from size/2 and exists?/3
whitfin Oct 28, 2025
9d657ed
Remove tagging from update operations
whitfin Oct 29, 2025
dd085c0
Strip out tagging from various read operations
whitfin Oct 29, 2025
ad7de3e
Strip unnecessary tags from cache streams
whitfin Oct 29, 2025
64ffdb7
Remove tagging from inspection and commands
whitfin Oct 30, 2025
57aef5c
Remove redundant tags from get/3 and fetch/4
whitfin Oct 30, 2025
c8a5ace
Remove redundant tags from write operations
whitfin Oct 30, 2025
1338d53
Replace tags inside stats and blocks
whitfin Oct 31, 2025
3133589
Improve router peformance for local actions
whitfin Nov 10, 2025
47bce61
Remove unnecessary whereis lookups
whitfin Nov 10, 2025
08b2403
Replace true with :ok for various signatures
whitfin Nov 10, 2025
179d572
Add returned number of prune entries in prune/3
whitfin Nov 10, 2025
a9c0eea
Add a default parameter to get/3 and friends
whitfin Nov 11, 2025
be25ff2
Add :commit support to invoke/4
whitfin Nov 17, 2025
6b0248b
Remove the first batch of bang delegates
whitfin Dec 1, 2025
6955eb1
Update missing test cases
whitfin Dec 20, 2025
cca542e
Rename long_form/1 to explain/1 for clarity
whitfin Dec 20, 2025
532b9cc
Update CI to include Elixir v1.19
whitfin Dec 20, 2025
f4361f7
Simplify startup flow and typing
whitfin Dec 20, 2025
9e78912
Sweep documentation for outdated signatures
whitfin Dec 22, 2025
9 changes: 5 additions & 4 deletions .github/workflows/ci.yml
@@ -16,6 +16,7 @@ jobs:
fail-fast: false
matrix:
elixir:
- '1.19'
- '1.18'
- '1.17'
- '1.16'
@@ -47,7 +48,7 @@ jobs:
name: Benchmark
runs-on: ubuntu-latest
container:
image: elixir:1.18
image: elixir:1.19
steps:
- uses: actions/checkout@v4

@@ -65,7 +66,7 @@ jobs:
name: Coverage
runs-on: ubuntu-latest
container:
image: elixir:1.18
image: elixir:1.19
env:
MIX_ENV: cover
steps:
@@ -87,7 +88,7 @@ jobs:
name: Documentation
runs-on: ubuntu-latest
container:
image: elixir:1.18
image: elixir:1.19
steps:
- uses: actions/checkout@v4

@@ -111,7 +112,7 @@ jobs:
name: Linting
runs-on: ubuntu-latest
container:
image: elixir:1.18
image: elixir:1.19
steps:
- uses: actions/checkout@v4

41 changes: 19 additions & 22 deletions README.md
@@ -57,40 +57,37 @@ Let's take a quick look at some basic calls you can make to a cache in a quick `
{:ok, _pid} = Cachex.start_link(:my_cache)

# place a "my_value" string against the key "my_key"
{:ok, true} = Cachex.put(:my_cache, "my_key", "my_value")
:ok = Cachex.put(:my_cache, "my_key", "my_value")

# verify that the key exists under the key name
{:ok, true} = Cachex.exists?(:my_cache, "my_key")
true = Cachex.exists?(:my_cache, "my_key")

# verify that "my_value" is returned when we retrieve
{:ok, "my_value"} = Cachex.get(:my_cache, "my_key")
"my_value" = Cachex.get(:my_cache, "my_key")

# remove the "my_key" key from the cache
{:ok, true} = Cachex.del(:my_cache, "my_key")
:ok = Cachex.del(:my_cache, "my_key")

# verify that the key no longer exists
{:ok, false} = Cachex.exists?(:my_cache, "my_key")
false = Cachex.exists?(:my_cache, "my_key")

# verify that "my_value" is no longer returned
{:ok, nil} = Cachex.get(:my_cache, "my_key")
nil = Cachex.get(:my_cache, "my_key")
```

It's worth noting here that the actions supported by the Cachex API have an automatically generated "unsafe" equivalent (i.e. appended with `!`). These options will unpack the returned tuple, and return values directly:
For cache actions which are fallible and can return an error tuple, the Cachex API provides an automatically generated "unsafe" equivalent (i.e. appended with `!`). These variants will unpack the returned tuple and return values directly:

```elixir
# calling by default will return a tuple
{:ok, nil} = Cachex.get(:my_cache, "key")
# write a non-numeric value to the table
:ok = Cachex.put(:my_cache, "one", "one")

# calling with `!` unpacks the tuple
nil = Cachex.get!(:my_cache, "key")

# causing an error will return an error tuple value
{:error, :no_cache} = Cachex.get(:missing_cache, "key")
# attempt to increment a non-numeric value and get an error
{:error, :non_numeric_value} = Cachex.incr(:my_cache, "one")

# but calling with `!` raises the error
Cachex.get!(:missing_cache, "key")
** (Cachex.Error) Specified cache not running
(cachex) lib/cachex.ex:249: Cachex.get!/3
Cachex.incr!(:my_cache, "one")
** (Cachex.Error) Attempted arithmetic operations on a non-numeric value
(cachex) lib/cachex.ex:1439: Cachex.unwrap/1
```

The `!` version of functions exists for convenience, in particular to make chaining and assertions easier in unit testing. For production use cases it's recommended to avoid `!` wrappers, and instead explicitly handle the different response types.
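
To make the recommendation concrete, here's a minimal sketch of explicit response handling. This is an illustration only; it assumes the post-change API in which fallible actions return either a bare value or an `{:error, reason}` tuple:

```elixir
# a sketch of explicitly handling responses in production code,
# rather than letting a `!` wrapper raise a Cachex.Error
case Cachex.incr(:my_cache, "one") do
  # fallible actions surface failures as an error tuple
  {:error, reason} ->
    {:error, reason}

  # otherwise the incremented value comes back directly
  value when is_integer(value) ->
    {:ok, value}
end
```

This keeps failure cases visible at the call site instead of deferring them to a raised exception.
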
@@ -106,7 +103,7 @@ While the list is too long to properly cover everything in detail here, let's ta
{:ok, _pid} = Cachex.start_link(:my_cache)

# place some values in a single batch call
{:ok, true} = Cachex.put_many(:my_cache, [
:ok = Cachex.put_many(:my_cache, [
{"key1", 1},
{"key2", 2},
{"key3", 3}
@@ -118,21 +115,21 @@ While the list is too long to properly cover everything in detail here, let's ta
end)

# we can also do this via `Cachex.incr/2`
{:ok, 2} = Cachex.incr(:my_cache, "key2")
2 = Cachex.incr(:my_cache, "key2")

# and of course the inverse via `Cachex.decr/2`
{:ok, 0} = Cachex.decr(:my_cache, "key3")
0 = Cachex.decr(:my_cache, "key3")

# we can also lazily compute keys if they're missing from the cache
{:commit, "nazrat"} = Cachex.fetch(:my_cache, "tarzan", fn key ->
{:commit, String.reverse(key)}
end)

# we can also write keys with a time expiration (in milliseconds)
{:ok, true} = Cachex.put(:my_cache, "secret_mission", "...", expire: 1)
:ok = Cachex.put(:my_cache, "secret_mission", "...", expire: 1)

# and if we pull it back after expiration, it's not there!
{:ok, nil} = Cachex.get(:my_cache, "secret_mission")
nil = Cachex.get(:my_cache, "secret_mission")
```

These are just some of the conveniences made available by Cachex's API, but there's still a bunch of other fun stuff in the `Cachex` API, covering a broad range of patterns and use cases.
29 changes: 20 additions & 9 deletions docs/extensions/custom-commands.md
@@ -38,9 +38,20 @@ Cachex.start_link(:my_cache, [
])
```

Each command receives a cache value to operate on and return. A command flagged as `:read` (such as `:last` above) will simply transforms the cache value before the final command return occurs, allowing the cache to mask complicated logic from the calling module. Commands flagged as `:write` are a little more complicated, but still fairly easy to grasp. These commands *must* return a 2-element tuple, with the return value in index `0` and the new cache value in index `1`.
Each command receives a cache value to operate on and return. A command flagged as `:read` will simply transform the cache value before it's returned to the user, allowing a developer to mask complicated logic directly in the cache itself rather than in the calling module. This is suitable for storing specific structures in your cache and allowing "direct" operations on them (i.e. lists, maps, etc.).

It is important to note that custom cache commands _will_ receive `nil` values in the cache of a missing cache key. If you're using a `:write` command and receive a misisng value, your returned modified value will only be written back to the cache if it's no longer `nil`. This allows the developer to implement logic such as lazy loading, but also escape the situation where you're cornered into writing to the cache.
Commands flagged as `:write` are a little more complicated, but still fairly easy to grasp. These commands *must* always resolve to a 2-element tuple, with the value to return from the call at index `0` and the new cache value at index `1`. You can either return the 2-element tuple as-is, or it can be wrapped in the `:commit` interfaces of Cachex:

```elixir
lpop = fn
([ head | tail ]) ->
{:commit, {head, tail}}
(_) ->
{:ignore, nil}
end
```

This provides uniform handling across other cache interfaces, and makes it possible to implement things like lazy loading while providing an escape for the developer in cases where writing should be skipped. This is not perfect, so behaviour here may change in future as new options become available.

## Invoking Commands

@@ -50,19 +61,19 @@ Let's look at some examples of calling the new `:last` and `:lpop` commands we d

```elixir
# place a new list into our cache of 3 elements
{ :ok, true } = Cachex.put(:my_cache, "my_list", [ 1, 2, 3 ])
:ok = Cachex.put(:my_cache, "my_list", [ 1, 2, 3 ])

# check the last value in the list stored under "my_list"
{ :ok, 3 } = Cachex.invoke(:my_cache, :last, "my_list")
3 = Cachex.invoke(:my_cache, :last, "my_list")

# pop all values from the list stored under "my_list"
{ :ok, 1 } = Cachex.invoke(:my_cache, :lpop, "my_list")
{ :ok, 2 } = Cachex.invoke(:my_cache, :lpop, "my_list")
{ :ok, 3 } = Cachex.invoke(:my_cache, :lpop, "my_list")
{ :ok, nil } = Cachex.invoke(:my_cache, :lpop, "my_list")
1 = Cachex.invoke(:my_cache, :lpop, "my_list")
2 = Cachex.invoke(:my_cache, :lpop, "my_list")
3 = Cachex.invoke(:my_cache, :lpop, "my_list")
nil = Cachex.invoke(:my_cache, :lpop, "my_list")

# check the last value in the list stored under "my_list"
{ :ok, nil } = Cachex.invoke(:my_cache, :last, "my_list")
nil = Cachex.invoke(:my_cache, :last, "my_list")
```

We can see that both commands are doing their job, and we're left with an empty list at the end of this snippet. At the time of writing there are no options recognised by `Cachex.invoke/4`; even though there _is_ an optional fourth parameter for options, it's simply future proofing.
4 changes: 2 additions & 2 deletions docs/extensions/execution-lifecycle.md
@@ -56,10 +56,10 @@ Below is an example just to show this in context of a cache call, assuming we're

```elixir
# given this cache call and result
{ :ok, "value" } = Cachex.get(:my_cache, "key")
"value" = Cachex.get(:my_cache, "key")

# you would receive these notification params
{ :get, [ :my_cache, "key" ] }, { :ok, "value" }
{ :get, [ :my_cache, "key" ], "value" }
```

Using this pattern makes it simple to hook into specific actions or specific cases (such as error cases), which is a powerful tool enabled by a very simple interface.
24 changes: 12 additions & 12 deletions docs/general/batching-actions.md
@@ -8,17 +8,17 @@ The simplest way to make several cache calls together is `Cachex.execute/3`. Thi

```elixir
# standard way to execute several actions
r1 = Cachex.get!(:my_cache, "key1")
r2 = Cachex.get!(:my_cache, "key2")
r3 = Cachex.get!(:my_cache, "key3")
r1 = Cachex.get(:my_cache, "key1")
r2 = Cachex.get(:my_cache, "key2")
r3 = Cachex.get(:my_cache, "key3")

# using Cachex.execute/3 to optimize the batch of calls
{r1, r2, r3} =
Cachex.execute!(:my_cache, fn cache ->
# execute our batch of actions
r1 = Cachex.get!(cache, "key1")
r2 = Cachex.get!(cache, "key2")
r3 = Cachex.get!(cache, "key3")
r1 = Cachex.get(cache, "key1")
r2 = Cachex.get(cache, "key2")
r3 = Cachex.get(cache, "key3")

# pass back all results as a tuple
{r1, r2, r3}
@@ -41,13 +41,13 @@ It's important to note that even though you're executing a batch of actions, oth
# start our execution block
Cachex.execute!(:my_cache, fn cache ->
# set a base value in the cache
Cachex.put!(cache, "key", "value")
Cachex.put(cache, "key", "value")

# we're paused but other changes can happen
:timer.sleep(5000)

# this may have have been set elsewhere
Cachex.get!(cache, "key")
Cachex.get(cache, "key")
end)
```

@@ -61,15 +61,15 @@ The entry point to a Cachex transaction is (unsurprisingly) `Cachex.transaction/

```elixir
# start our execution block
Cachex.transaction!(:my_cache, ["key"], fn cache ->
Cachex.transaction(:my_cache, ["key"], fn cache ->
# set a base value in the cache
Cachex.put!(cache, "key", "value")
Cachex.put(cache, "key", "value")

# we're paused but other changes will not happen
:timer.sleep(5000)

# this will be guaranteed to return "value"
Cachex.get!(cache, "key")
Cachex.get(cache, "key")
end)
```

@@ -78,7 +78,7 @@ It's critical to provide the keys you wish to lock when calling `Cachex.transact
Another pattern which may prove useful is providing an empty list of keys, which will guarantee that your transaction runs at a time when no keys in the cache are currently locked. For example, the following code will guarantee that no keys are locked when purging expired records:

```elixir
Cachex.transaction!(:my_cache, [], fn cache ->
Cachex.transaction(:my_cache, [], fn cache ->
Cachex.purge!(cache)
end)
```
6 changes: 3 additions & 3 deletions docs/general/local-persistence.md
@@ -7,7 +7,7 @@ Cachex ships with basic support for saving a cache to a local file using the [Ex
To save a cache to a file on disk, you can use the `Cachex.save/3` function. This function will handle compression automatically and populate the path on disk with a file you can import later. It should be noted that the internal format of this file should not be relied upon.

```elixir
{ :ok, true } = Cachex.save(:my_cache, "/tmp/my_cache.dat")
:ok = Cachex.save(:my_cache, "/tmp/my_cache.dat")
```

The above demonstrates how simple it is to save your cache to a location on disk (in this case `/tmp/my_cache.dat`). Any options can be provided as a `Keyword` list as an optional third parameter.
@@ -18,10 +18,10 @@ To seed a cache from an existing file, you can use `Cachex.restore/3`. This will

```elixir
# optionally clean your cache first
{ :ok, _amt } = Cachex.clear(:my_cache)
amount = Cachex.clear(:my_cache)

# then you can load the existing save into your cache
{ :ok, true } = Cachex.restore(:my_cache, "/tmp/my_cache.dat")
^amount = Cachex.restore(:my_cache, "/tmp/my_cache.dat")
```

Please note that loading from an existing file will maintain all existing expirations, and records which have already expired will *not* be added to the cache table. This should not be surprising, but it is worth calling out.
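
As a hedged illustration of that behaviour (assuming millisecond expirations and the `save`/`restore` signatures shown above; treat it as a sketch rather than a guarantee):

```elixir
# write a key which expires almost immediately, then save the cache
Cachex.put(:my_cache, "short_lived", "...", expire: 1)
:ok = Cachex.save(:my_cache, "/tmp/my_cache.dat")

# wait for the expiration to pass, then clear and restore
:timer.sleep(5)
Cachex.clear(:my_cache)
Cachex.restore(:my_cache, "/tmp/my_cache.dat")

# the expired record was skipped during the restore
nil = Cachex.get(:my_cache, "short_lived")
```
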
9 changes: 6 additions & 3 deletions docs/general/streaming-records.md
@@ -7,14 +7,17 @@ Cachex provides the ability to create an Elixir `Stream` seeded by the contents
By default, `Cachex.stream/3` will return a `Stream` over all entries in a cache which are yet to expire (at the time of stream creation). These cache entries will be streamed as `Cachex.Spec.entry` records, so you can use pattern matching to pull any of the entry fields assuming you have `Cachex.Spec` imported:

```elixir
# for matching
import Cachex.Spec

# store some values in the cache
Cachex.start(:my_cache)
Cachex.put(:my_cache, "one", 1)
Cachex.put(:my_cache, "two", 2)
Cachex.put(:my_cache, "three", 3)

# create our cache stream of all records
{ :ok, stream } = Cachex.stream(:my_cache)
stream = Cachex.stream(:my_cache)

# sum up all the cache record values, which == 6
Enum.reduce(stream, 0, fn entry(value: value), total ->
@@ -46,7 +49,7 @@ query = Cachex.Query.build(where: filter, output: :value)

# == 4
:my_cache
|> Cachex.stream!(query)
|> Cachex.stream(query)
|> Enum.sum()
```

@@ -72,7 +75,7 @@ query = Cachex.Query.build(where: filter, output: :value)

# == 4
:my_cache
|> Cachex.stream!(query)
|> Cachex.stream(query)
|> Enum.sum()
```

20 changes: 11 additions & 9 deletions docs/management/limiting-caches.md
@@ -1,6 +1,6 @@
# Limiting Caches

Cache limits are restrictions on a cache to ensure that it stays within given bounds. The limits currently shipped inside Cachex are based around the number of entries inside a cache, but there are plans to add new policies in future (for example basing the limits on memory spaces). You even even write your own!
Cache limits are restrictions on a cache to ensure that it stays within given bounds. The limits currently shipped inside Cachex are based around the number of entries inside a cache, but there are plans to add new policies in future (for example basing the limits on memory spaces). You can even write your own!

## Manual Pruning

@@ -14,19 +14,21 @@ Cachex.start(:my_cache)

# insert 100 keys
for i <- 1..100 do
Cachex.put!(:my_cache, i, i)
Cachex.put(:my_cache, i, i)
end

# guarantee we have 100 keys in the cache
{ :ok, 100 } = Cachex.size(:my_cache)
100 = Cachex.size(:my_cache)

# trigger a pruning down to 50 keys only
{ :ok, true } = Cachex.prune(:my_cache, 50, reclaim: 0)
50 = Cachex.prune(:my_cache, 50, reclaim: 0)

# verify that we're down to 50 keys
{ :ok, 50 } = Cachex.size(:my_cache)
50 = Cachex.size(:my_cache)
```

As part of pruning, `Cachex.prune/3` will trigger a call to `Cachex.purge/2` to first remove expired entries before cutting potentially unnecessary entries. While the return value of `Cachex.prune/3` represents how many cache entries were *pruned*, it should be noted that the number of expired entries is not included in this value.
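
A hypothetical sketch of that accounting (the exact values assume nothing else touches the cache, so read this as an illustration rather than a guarantee):

```elixir
# insert nine long-lived keys and one that expires immediately
for i <- 1..9, do: Cachex.put(:my_cache, i, i)
Cachex.put(:my_cache, 10, 10, expire: 1)
:timer.sleep(5)

# pruning to 5 first purges the 1 expired entry, then evicts 4 more;
# only the 4 evictions are reflected in the return value
4 = Cachex.prune(:my_cache, 5, reclaim: 0)
5 = Cachex.size(:my_cache)
```
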

The `:reclaim` option can be used to reduce thrashing by evicting an additional number of entries. Without it, in the case above, the very next write would push the cache back over its limit and require another pruning, and so on. The `:reclaim` option accepts a percentage (as a decimal) of extra keys to evict, which gives us a buffer between prunings of a cache.

To demonstrate this we can run the same example as above, except using a `:reclaim` of `0.1` (the default). This time we'll be left with 45 keys instead of 50, as we reclaimed an extra 10% of the table (`50 * 0.1 = 5`):
@@ -37,17 +39,17 @@ Cachex.start(:my_cache)

# insert 100 keys
for i <- 1..100 do
Cachex.put!(:my_cache, i, i)
Cachex.put(:my_cache, i, i)
end

# guarantee we have 100 keys in the cache
{ :ok, 100 } = Cachex.size(:my_cache)
100 = Cachex.size(:my_cache)

# trigger a pruning down to 50 keys, reclaiming 10%
{ :ok, true } = Cachex.prune(:my_cache, 50, reclaim: 0.1)
55 = Cachex.prune(:my_cache, 50, reclaim: 0.1)

# verify that we're down to 45 keys
{ :ok, 45 } = Cachex.size(:my_cache)
45 = Cachex.size(:my_cache)
```

It is almost never a good idea to set `reclaim: 0` unless you have very specific use cases, so if you don't it's recommended to leave `:reclaim` at the default value - it was only used above for example purposes.