Easy interprocess communication.
What would it take to make IPC easier, more robust, and more fun?
- Reading and writing processes come and go... so message channels should outlast them
- Machines crash... so channels should persist on disk
- Disks are finite... so channels should be bounded in size
- Message brokers bring complexity and ceremony... so for local IPC, don't require a broker
- Observability is crucial... so messages must be inspectable
- Schemas are great... but schemas should be optional
- Latency matters... so IPC should be fast, zero-copy wherever possible
So, there's Plasmite.
| Alice's terminal | Bob's terminal |
|---|---|
| `# Alice creates a channel`<br>`pls pool create my-channel` | |
| | `# Bob starts reading`<br>`pls follow my-channel` |
| `# Alice writes a message`<br>`pls feed my-channel '{"from": "alice", "msg": "hello world"}'` | |
| | `# Bob sees it arrive`<br>`{ "data": {"from": "alice", "msg": "hello world"}, ... }` |
Plasmite is a CLI and library suite (Rust, Python, Go, Node, C) for sending and receiving JSON messages through persistent, disk-backed ring-buffer channels called "pools". There's no daemon, no broker, and no fancy config required. It's quick, too: ~600k msg/sec on a laptop.

For IPC across machines, `pls serve` exposes your local pools securely and includes a minimal web UI.
| Alternative | Drawbacks | Plasmite |
|---|---|---|
| Log files / `tail -f` | Unstructured, grow forever, no sequence numbers, fragile parsing | Structured JSON with sequence numbers. Filter with tags or jq. Disk usage stays bounded. |
| Temp files + locks | No streaming, easy to corrupt, readers block writers | Writers append concurrently, readers stream in real time. No corruption, no contention. |
| Redis / NATS | Another server to run and monitor; overkill for single-host messaging | Just files on disk — no daemon, no ports, no config. If you only need local or host-adjacent messaging, don't introduce a broker. |
| SQLite as a queue | Polling-based, write contention, schema and vacuuming are on you | Built for message streams: follow/replay, concurrent writers, no schema, no cleanup, no polling. |
| Named pipes | One reader at a time, writers block, nothing persists | Readers and writers come and go freely. Messages survive restarts. |
| Unix domain sockets | Stream-oriented, no message framing, no persistence, one-to-one | Message boundaries and sequence numbers built in. Fan-out to any number of readers. |
| Poll a directory | Busy loops, no ordering, files accumulate forever | Messages stream in order. The ring buffer won't fill your disk. |
| Shared memory | No persistence, painful to coordinate, binary formats | Human-readable JSON on disk, zero-copy reads, no coordination pain. |
| ZeroMQ | No persistence, complex pattern zoo, binary protocol, library in every process | Durable and human-readable by default. One CLI command or library call to get started. |
| Language-specific queue libs | Tied to one runtime; no CLI, no cross-language story | Consistent CLI + multi-language bindings (Rust, Python, Go, Node, C) + versioned on-disk format. An ecosystem, not a single-language helper. |
Your build script writes progress to a pool. In another terminal, you follow it in real time.
```
pls feed build --create '{"step": "compile", "status": "done"}'
pls feed build '{"step": "test", "status": "running"}'

# elsewhere:
pls follow build
```

Your deploy script waits for the test runner to say "green": no polling loops, no lock files, no shared database.
```
# deploy.sh
pls follow ci --where '.data.status == "green"' --one > /dev/null && ./deploy-to-staging.sh

# test-runner.sh
pls feed ci --create '{"status": "green", "commit": "abc123"}'
```

Pipe your system logs into a bounded pool. It won't fill your disk, and you can replay anything later.
```
journalctl -o json-seq -f | pls feed syslog --create    # Linux
pls follow syslog --since 30m --replay 1                # replay the last 30 min
```

Tag events when you write them, then filter and replay on the read side.
```
pls feed incidents --create --tag sev1 '{"msg": "payment gateway timeout"}'
pls follow incidents --tag sev1 --where '.data.msg | test("timeout")'
pls follow incidents --since 1h --replay 10
```

Start a server and your pools are available over HTTP. Clients use the same CLI; just pass a URL.
```
pls serve        # loopback-only by default
pls serve init   # bootstrap TLS + token for LAN access

pls feed http://server:9700/events '{"sensor": "temp", "value": 23.5}'
pls follow http://server:9700/events --tail 20
```

A built-in web UI lives at `/ui`.
For CORS, auth, and deployment details, see Serving & remote access and the remote protocol spec.
More examples — polyglot producer/consumer, multi-writer event bus, API stream ingest, CORS setup — in the Cookbook.
Plasmite is designed for single-host and host-adjacent messaging. If you need multi-host cluster replication, schema registries, or workflow orchestration, see When Plasmite Isn't the Right Fit.
```
brew install sandover/tap/plasmite
```

Installs the CLI (`plasmite` + `pls`) and the full SDK (`libplasmite`, C header, pkg-config). The Go bindings link against this SDK, so install via Homebrew first if you're using Go.
```
cargo install plasmite   # CLI only (plasmite + pls)
cargo add plasmite       # use as a library in your Rust project
```

```
uv tool install plasmite   # standalone CLI + Python bindings
uv add plasmite            # add to an existing uv-managed project
```

The wheel includes pre-built native bindings.
```
npm i -g plasmite
```

The package includes pre-built native bindings.
```
go get github.com/sandover/plasmite/bindings/go/local
```

Bindings only (no CLI). Links against `libplasmite` via cgo, so you'll need the SDK on your system first: via Homebrew on macOS, or from a GitHub Releases tarball on Linux.
Tarballs for Linux and macOS are on GitHub Releases. Each archive contains `bin/`, `lib/`, `include/`, and `lib/pkgconfig/`.

Windows builds (`x86_64-pc-windows-msvc`) are available via npm and PyPI. See the distribution docs for the full install matrix.
| Command | What it does |
|---|---|
| `feed POOL DATA` | Send a message (`--create` to auto-create the pool) |
| `follow POOL` | Follow messages (`--create` auto-creates missing local pools) |
| `fetch POOL SEQ` | Fetch one message by sequence number |
| `pool create NAME` | Create a pool (`--size 8M` for larger) |
| `pool list` | List pools |
| `pool info NAME` | Show pool metadata and metrics |
| `pool delete NAME...` | Delete one or more pools |
| `doctor POOL \| --all` | Validate pool integrity |
| `serve` | HTTP server (loopback default; non-loopback opt-in) |
`pls` and `plasmite` are the same binary. Shell completion: `plasmite completion bash|zsh|fish`.

Remote pools support read and write; `--create` is local-only.

For scripting, use `--json` with `pool create`, `pool list`, `pool delete`, `doctor`, and `serve check`.
A pool is a single `.plasmite` file containing a persistent ring buffer:
- Multiple writers append concurrently (serialized via OS file locks)
- Multiple readers follow concurrently (lock-free, zero-copy)
- Bounded retention — old messages overwritten when full (default 1 MB, configurable)
- Crash-safe — processes crash and restart; torn writes never propagate
Every message carries a `seq` (monotonic), a `time` (nanosecond precision), optional `tags`, and your JSON `data`. Tags and `--where` (jq predicates) compose for filtering. See the CLI spec § pattern matching.
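Concretely, an envelope looks something like this (an illustrative sketch: the field names are as described above, but the exact encoding of `time` and the field order are assumptions, not the specified output format):

```json
{
  "seq": 42,
  "time": 1712345678901234567,
  "tags": ["sev1"],
  "data": { "msg": "payment gateway timeout" }
}
```

A `--tag` filter matches against `tags`, while a `--where` predicate like `'.data.msg | test("timeout")'` runs against the whole envelope.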
Default pool directory: `~/.plasmite/pools/`.
| Metric | Value |
|---|---|
| Append throughput | ~600k msg/sec (single writer, M3 MacBook) |
| Read | Lock-free, zero-copy via mmap |
| On-disk format | Lite3 (zero-copy, JSON-compatible binary); field access without deserialization |
| Message overhead (framing) | 72-79 bytes per message (64B header + 8B commit marker + alignment) |
| Default pool size | 1 MB |
How reads work: The pool file is memory-mapped. Readers walk committed frames directly from the mapped region: no read syscalls, no buffer copies. Payloads are stored in Lite3, a zero-copy binary format that is JSON-compatible in both directions: every valid JSON document has an equivalent Lite3 representation and vice versa. Lite3 supports field lookup by offset, so tag filtering and `--where` predicates run without deserializing the full message. JSON conversion happens only at the output boundary.
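To make the shape of that read path concrete, here is a minimal Python sketch of a reader walking frames in a memory-mapped file. The frame layout used here (1-byte state, 4-byte length, payload) is a made-up stand-in, not the real pool or Lite3 format; it only illustrates the pattern: map once, walk committed frames, never copy payloads.

```python
import mmap
import os
import struct

COMMITTED = 1  # hypothetical frame state; real state values are format-defined

def walk_committed_frames(path: str):
    """Yield committed payloads as zero-copy slices of the mapping.

    Toy frame layout (NOT the real Plasmite format): 1-byte state,
    4-byte little-endian length, then the payload. After the initial
    mmap there are no read() syscalls; payloads are memoryview slices
    of the mapped region, not copies.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        buf = mmap.mmap(fd, 0, prot=mmap.PROT_READ)  # Unix; ACCESS_READ on Windows
    finally:
        os.close(fd)  # the mapping stays valid after the fd is closed
    view = memoryview(buf)
    pos = 0
    while pos + 5 <= len(buf):
        state, length = struct.unpack_from("<BI", buf, pos)
        if state == COMMITTED:
            yield view[pos + 5 : pos + 5 + length]  # zero-copy slice
        pos += 5 + length  # frames still marked Writing are skipped
```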
How writes work: Writers acquire an OS file lock, plan frame placement (including ring wrap), write the frame as Writing, then flip it to Committed and update the header. The lock is held only for the memcpy + header update — no allocation or encoding happens under the lock.
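A toy version of the same discipline, reusing the throwaway frame layout from the reader sketch above (again, not the real format, and simplified to append-only rather than a ring): encode before taking the lock, write the frame marked Writing, then publish it with a one-byte state flip.

```python
import fcntl  # Unix-only file locking
import os
import struct

WRITING, COMMITTED = 0, 1  # same toy states as the reader sketch

def append(path: str, payload: bytes) -> None:
    """Append one frame under an exclusive OS file lock.

    The frame is written as Writing, then flipped to Committed. A crash
    before the flip leaves a frame readers never yield, which is how
    torn writes are kept from propagating.
    """
    # payload is already encoded; nothing is allocated under the lock
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    with os.fdopen(fd, "r+b") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # serialize concurrent writers
        try:
            f.seek(0, os.SEEK_END)
            frame_at = f.tell()
            f.write(struct.pack("<BI", WRITING, len(payload)))
            f.write(payload)
            f.flush()
            f.seek(frame_at)
            f.write(struct.pack("<B", COMMITTED))  # one-byte flip publishes it
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```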
How lookups work: Each pool includes an inline index: a fixed-size hash table mapping sequence numbers to byte offsets. `fetch POOL 42` usually jumps directly to the right frame. If the slot is stale or collided, the reader scans forward from the tail. You can tune this with `--index-capacity` at pool creation time.
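That behavior is easy to model in a few lines of Python. This is a toy (the slot layout, types, and scan order are stand-ins), but it captures the two paths: a direct jump when the slot still holds the target seq, and a linear scan when it has been overwritten.

```python
M = 1024                             # index slot count, per --index-capacity
index = [(-1, -1)] * M               # (seq, byte offset) per slot
frames: list[tuple[int, int]] = []   # (seq, byte offset) in ring order

def record(seq: int, offset: int) -> None:
    """Called on append: newer entries overwrite colliding slots."""
    frames.append((seq, offset))
    index[seq % M] = (seq, offset)

def fetch_offset(seq: int) -> int | None:
    """Usually O(1): direct jump when the slot still holds our seq.
    Otherwise O(N): scan forward from the tail for the target seq."""
    slot_seq, offset = index[seq % M]
    if slot_seq == seq:
        return offset
    for s, off in frames:
        if s == seq:
            return off
    return None
```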
Algorithmic complexity below uses N = visible messages in the pool (depends on message sizes and pool capacity), M = index slot count, and R = messages in an exported range.
| Operation | Complexity | Notes |
|---|---|---|
| Append | O(1) + O(payload bytes) | Writes one frame, updates one index slot, publishes the header. `durability=flush` adds OS flush cost. |
| Get by seq (`fetch POOL SEQ`) | Usually O(1); O(N) worst case | If the index slot matches, it's a direct jump. If the slot is overwritten, stale, or invalid (or M=0), it scans forward from the tail until it finds (or passes) the target seq. |
| Tail / follow (`follow`, `export --tail`) | O(k) to emit k; then O(1)/message | Steady-state work is per message. Tag filters are cheap; `--where` runs a jq predicate per message. |
| Export range (`export --from/--to`) | O(R) | Linear in the number of exported messages. |
| Validate (`doctor`, `pool info` warnings) | O(N) | Full ring scan. Index checks are sampled/best-effort diagnostics. |
Native bindings:

Go:

```go
client, _ := plasmite.NewClient("./data")
pool, _ := client.CreatePool(plasmite.PoolRefName("events"), 1024*1024)
pool.Append(map[string]any{"sensor": "temp", "value": 23.5}, nil, plasmite.DurabilityFast)
```

Python:

```python
from plasmite import Client, Durability

client = Client("./data")
pool = client.create_pool("events", 1024*1024)
pool.append_json(b'{"sensor": "temp", "value": 23.5}', [], Durability.FAST)
```

Node:

```js
const { Client, Durability } = require("plasmite")
const client = new Client("./data")
const pool = client.createPool("events", 1024 * 1024)
pool.appendJson(Buffer.from('{"sensor": "temp", "value": 23.5}'), [], Durability.Fast)
```

See Go bindings, Python bindings, and Node bindings.
Specs: CLI | API | Remote protocol
Guides: Serving & remote access | Distribution
Contributing: See `AGENTS.md` for CI hygiene; `docs/record/releasing.md` for the release process
Changelog | Inspired by Oblong Industries' Plasma.
MIT. See `THIRD_PARTY_NOTICES.md` for vendored code.
