Standalone Rust service that renders RMRK equippable NFTs into flat images. SVG-first rendering, deterministic caching, and a minimal admin panel and API.
- Canonical render endpoints with cache-busting via the `cache=` query param
- SVG + PNG/JPG/WebP asset support (SVG rasterized with resvg)
- Deterministic canvas size derived from the first fixed part
- Partial renders are not cached
- IPFS gateway rotation + asset caching
- Warmup queue with safe concurrency
- Embedded admin panel (`/admin`) with JSON API
- Admin-managed fallback overrides for unapproved collections and token-level fixes
- Prometheus `/metrics` endpoint with Top-K tracking for IPs/collections/sources
```
cd proj-renderer
cargo build --release
export ADMIN_PASSWORD="change-me"
export RPC_ENDPOINTS='{"base":["https://mainnet.base.org"]}'
./target/release/proj-renderer
```

See env.example for a full configuration template.

Health check:

```
curl http://localhost:8080/healthz
```

All configuration is done via environment variables; see env.example for the full list.
Note: outbound HTTP(S) fetches block private/loopback/link-local hosts and do not
follow redirects by default. Use ALLOW_PRIVATE_NETWORKS=true only in trusted
environments.
Render safety caps:
- `MAX_LAYERS_PER_RENDER` limits total layers processed per render.
- `MAX_CANVAS_PIXELS` caps the canvas area (width × height).
- `MAX_TOTAL_RASTER_PIXELS` caps total raster pixels across layers.
- `MAX_DECODED_RASTER_PIXELS` caps raster decode dimensions before allocation.
- `MAX_RASTER_RESIZE_BYTES` allows oversized raster downloads for resize.
- `MAX_RASTER_RESIZE_DIM` rescales oversized rasters to fit within a max dimension.
- `MAX_CACHE_VARIANTS_PER_KEY` limits cached timestamps per token/variant (evicts oldest).
- `MAX_OVERLAY_LENGTH` and `MAX_BG_LENGTH` cap query param length.
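As an illustration of how these caps interact, here is a minimal Rust sketch; the constant values, the `check_render_caps` helper, and its error strings are invented for the example (the real renderer reads the caps from the environment):

```rust
// Illustrative cap values; the service configures these via env vars.
const MAX_CANVAS_PIXELS: u64 = 16_000_000; // roughly a 4000x4000 canvas
const MAX_LAYERS_PER_RENDER: usize = 64;
const MAX_TOTAL_RASTER_PIXELS: u64 = 64_000_000;

/// Reject a render up front if the canvas or its layers exceed any cap.
fn check_render_caps(width: u32, height: u32, layer_pixel_counts: &[u64]) -> Result<(), String> {
    let canvas = width as u64 * height as u64;
    if canvas > MAX_CANVAS_PIXELS {
        return Err(format!("canvas {canvas} exceeds MAX_CANVAS_PIXELS"));
    }
    if layer_pixel_counts.len() > MAX_LAYERS_PER_RENDER {
        return Err("too many layers for one render".into());
    }
    let total: u64 = layer_pixel_counts.iter().sum();
    if total > MAX_TOTAL_RASTER_PIXELS {
        return Err("total raster pixels exceed cap".into());
    }
    Ok(())
}
```

Checking cheap aggregate numbers before decoding any pixels keeps a hostile token from forcing large allocations.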
HTTP safety caps:
- `MAX_IN_FLIGHT_REQUESTS` limits total concurrent HTTP requests.
- `RATE_LIMIT_PER_MINUTE` / `RATE_LIMIT_BURST` enable per-IP rate limiting (0 disables).
- `APPROVAL_ON_DEMAND_RATE_LIMIT_PER_MINUTE` / `APPROVAL_ON_DEMAND_RATE_LIMIT_BURST` throttle on-demand approval checks for unknown collections (per identity).
- `MAX_ADMIN_BODY_BYTES` caps admin API request bodies.
- Asset/metadata fetches resolve DNS once per request and pin the connection to the resolved IPs to reduce DNS rebinding risk.
- `CACHE_SIZE_REFRESH_SECONDS` controls how often cache size stats are refreshed for `/status` and the admin dashboard.
- `OUTBOUND_CLIENT_CACHE_TTL_SECONDS` / `OUTBOUND_CLIENT_CACHE_CAPACITY` cache pinned HTTP clients for outbound fetches.
- `CACHE_EVICT_INTERVAL_SECONDS` sets how often the cache eviction loop runs (0 disables).
- `MAX_CONCURRENT_RPC_CALLS` caps concurrent RPC calls (primary-route lookups + warmup fallbacks).
- `PRIMARY_ASSET_NEGATIVE_TTL_SECONDS` caches failed primary-asset lookups briefly to avoid RPC hammering.
- `DEFAULT_CACHE_TTL_SECONDS` sets a default HTTP cache TTL when `cache` is omitted.
- Child assets render at the slot part’s `z`. Slot fallback metadata is only used when no child is equipped.
- `RASTER_MISMATCH_FIXED`: `error`, `scale_to_canvas`, `center_no_scale`, or `top_left_no_scale`.
- `RASTER_MISMATCH_CHILD`: same values as `RASTER_MISMATCH_FIXED`, applied to equipped child layers.
- `COLLECTION_RENDER_OVERRIDES`: JSON map `"chain:collection" => { raster_mismatch_fixed, raster_mismatch_child }`.
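A sketch of how the four mismatch modes could translate into layer placement; the `RasterMismatch` enum and the returned `(x, y, w, h)` tuple are assumptions for illustration, not the renderer's actual types:

```rust
#[derive(Clone, Copy)]
enum RasterMismatch {
    Error,
    ScaleToCanvas,
    CenterNoScale,
    TopLeftNoScale,
}

/// Decide where a layer of size (lw x lh) lands on a canvas of size (cw x ch).
/// Returns (x, y, draw_width, draw_height) or an error for the `error` mode.
fn place_layer(
    mode: RasterMismatch,
    cw: i64,
    ch: i64,
    lw: i64,
    lh: i64,
) -> Result<(i64, i64, i64, i64), &'static str> {
    if lw == cw && lh == ch {
        return Ok((0, 0, lw, lh)); // conforming layer: draw as-is
    }
    match mode {
        RasterMismatch::Error => Err("nonconforming raster layer"),
        RasterMismatch::ScaleToCanvas => Ok((0, 0, cw, ch)),
        RasterMismatch::CenterNoScale => Ok(((cw - lw) / 2, (ch - lh) / 2, lw, lh)),
        RasterMismatch::TopLeftNoScale => Ok((0, 0, lw, lh)),
    }
}
```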
Example (Base ME avatars 0xb30b909c1fa58fd2b0f95eeea3fa0399b6f2382d):
- The skin is a fixed part at z=1, while the background is a slot child at z=0.
- Because child assets render at their slot’s z, the background remains at z=0 and the skin stays visible.
Access control:
- `ACCESS_MODE`: `open`, `key_required`, `hybrid`, `denylist_only`, or `allowlist_only`.
- `API_KEY_SECRET`: required unless `ACCESS_MODE=open`.
- `KEY_RATE_LIMIT_PER_MINUTE` / `KEY_RATE_LIMIT_BURST`: default per-key limits (overrides can be set per key).
- `AUTH_FAILURE_RATE_LIMIT_PER_MINUTE` / `AUTH_FAILURE_RATE_LIMIT_BURST`: rate limit for unauthorized requests.
- `USAGE_SAMPLE_RATE`: sampling for usage aggregation (lower this in open mode).
- `USAGE_RETENTION_DAYS`: retention for hourly usage aggregates (0 disables cleanup).
- `IDENTITY_IP_LABEL_MODE`: how IP-derived identities are stored (usage + failure logs).
- `TRACK_KEYS_IN_OPEN_MODE`: when `ACCESS_MODE=open` or `denylist_only`, DB lookups for bearer tokens are skipped unless this is set to `true`.
- API keys are accepted via `Authorization: Bearer` only (query-string keys are not supported).
The renderer exposes a Prometheus endpoint at `GET /metrics`. It is private by default; access is granted when any of the following are true:

- `METRICS_PUBLIC=true`
- the request IP is in `METRICS_ALLOW_IPS`
- the bearer token matches `METRICS_BEARER_TOKEN` (recommended)
- admin bearer auth is presented (`ADMIN_PASSWORD`)
Keep `METRICS_REQUIRE_ADMIN_KEY=true` in production so render-allowlisted IPs do not implicitly gain `/metrics` access. Use `METRICS_ALLOW_IPS` and/or a metrics bearer token for scrapes.
See metrics/README.md for dashboards, non-Docker setup (recommended for production), and
Docker compose (convenience only).
Minimal non-Docker steps:

- Install Prometheus + Grafana (package manager or upstream binaries).
- Configure Prometheus to scrape `http://127.0.0.1:8080/metrics` and either:
  - allowlist `METRICS_ALLOW_IPS=127.0.0.1/32`, or
  - set `METRICS_BEARER_TOKEN` and use it in the scrape config.
- Add Prometheus as a Grafana datasource and use the panel queries from metrics/README.md.
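Assuming Prometheus 2.26 or newer, a minimal scrape job matching the bearer-token option above might look like the following; the job name and the `credentials` value are placeholders:

```yaml
scrape_configs:
  - job_name: renderer
    metrics_path: /metrics
    static_configs:
      - targets: ["127.0.0.1:8080"]
    # Only needed when METRICS_BEARER_TOKEN is set instead of an IP allowlist.
    authorization:
      type: Bearer
      credentials: change-me
```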
Retention note: Prometheus retention is global (applies to all metrics), so a 7‑day cap will
drop all time series beyond that window. See metrics/README.md for the retention flag and options
if you want to keep failures longer or cap disk usage by size.
Security note: docker-compose.metrics.yml binds ports to 127.0.0.1 and disables anonymous
Grafana access by default. Avoid public exposure without an authenticated proxy.
Performance note: METRICS_REFRESH_INTERVAL_SECONDS controls cheap gauges (set 0 to disable),
and METRICS_EXPENSIVE_REFRESH_SECONDS controls disk scans (default 300s).
Source label note: top-source metrics are only recorded for authenticated (client key) requests,
and the source must validate as a hostname (X-Renderer-Source or Origin/Referer).
Unapproved collections are skipped in top-collection metrics to reduce churn.
AccessMode semantics:
- `open`: all requests allowed.
- `key_required`: only valid API keys allowed.
- `hybrid`: valid API keys always allowed; otherwise deny if an IP rule matches `deny`.
- `denylist_only`: deny if the API key is inactive or an IP rule matches `deny`.
- `allowlist_only`: allow if the API key is active; otherwise allow only if an IP rule matches `allow`.
IP rule precedence: longest CIDR prefix wins; on ties, deny beats allow.
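That precedence rule can be sketched as follows; the `IpRule` type is invented (IPv4-only for brevity) and the real service parses CIDR strings from configuration:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Verdict {
    Allow,
    Deny,
}

struct IpRule {
    net: u32,     // network address as a u32, e.g. 0x0A00_0000 for 10.0.0.0
    prefix: u8,   // CIDR prefix length, 0..=32
    verdict: Verdict,
}

fn rule_matches(rule: &IpRule, ip: u32) -> bool {
    let mask = if rule.prefix == 0 { 0 } else { u32::MAX << (32 - rule.prefix) };
    ip & mask == rule.net & mask
}

/// Longest prefix wins; on equal prefix length, deny beats allow.
fn evaluate(rules: &[IpRule], ip: u32) -> Option<Verdict> {
    rules
        .iter()
        .filter(|r| rule_matches(r, ip))
        // Sort key: longer prefix first; `true` (deny) outranks `false` on ties.
        .max_by_key(|r| (r.prefix, r.verdict == Verdict::Deny))
        .map(|r| r.verdict)
}
```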
On-demand approval checks for unknown collections only run when the request is authenticated with a valid API key or comes from an allowlisted IP.
- SVG parsing must never read local files.
- HTTP fetches must never reach private/loopback/link-local IPs.
- `overlay` and `bg` are normalized before cache key creation.
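The exact normalization rules are internal to the renderer; this hypothetical `normalize_bg` only illustrates the invariant that equivalent spellings must map to one cache key (the trim/lowercase/strip-`#` steps are assumptions):

```rust
/// Hypothetical normalization: equivalent bg spellings yield one canonical form,
/// so "#FFAA00" and "ffaa00" hash to the same cache key. Invalid values are
/// rejected rather than cached under junk keys.
fn normalize_bg(raw: &str) -> Option<String> {
    let v = raw.trim().trim_start_matches('#').to_ascii_lowercase();
    if v == "transparent" {
        return Some(v);
    }
    // Accept 6-digit hex only in this sketch.
    if v.len() == 6 && v.chars().all(|c| c.is_ascii_hexdigit()) {
        return Some(v);
    }
    None
}
```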
```
REQUIRE_APPROVAL=true
APPROVALS_CONTRACTS={"base":"0xYourRendererApprovalsContract"}
APPROVALS_CONTRACT_CHAIN=base
CHAIN_ID_MAP={"1":"ethereum","56":"bsc","137":"polygon","8453":"base","84532":"base-sepolia","1284":"moonbeam","1285":"moonriver","1287":"moonbase-alpha","31337":"hardhat"}
# See approval section in env.example for more
```

Set APPROVAL_POLL_INTERVAL_SECONDS=0 to disable approval watchers.
APPROVAL_NEGATIVE_CACHE_SECONDS and APPROVAL_NEGATIVE_CACHE_CAPACITY control
the in-memory negative cache for on-demand approval checks.
If REQUIRE_APPROVAL=true and you accept open traffic, use ACCESS_MODE=key_required
or strict rate limits to prevent on-demand approval checks from becoming an RPC
cost/availability lever.
Include a chain ID entry for every chain you enable.
Set MAX_APPROVAL_STALENESS_SECONDS to force an on-demand recheck when approval
sync is older than the configured window (0 disables the guardrail).
The Solidity contract for approvals (RendererApprovalsV2) is in solidity/RendererApprovals.sol.
It implements a minimal IRendererApprovalPolicy interface so other deployers can supply their
own on-chain policy contract as long as it exposes:
- `approved(chainId, collection) -> bool`
- `approvedUntil(chainId, collection) -> uint64`
- optional enumeration: `approvalKeyCount`, `approvalKeysPage`
CHAIN_ID_MAP is required to map approval events to configured chains. Use
APPROVALS_CONTRACT_CHAIN when a single approvals contract is deployed on one chain.
Set APPROVAL_ENUMERATION_ENABLED=false if your approvals contract does not implement
enumeration (the renderer will rely on on-demand checks + events only).
```
WARMUP_WIDTHS=["medium","large"]
WARMUP_INCLUDE_OG=true
WARMUP_MAX_TOKENS=1000
WARMUP_MAX_RENDERS_PER_JOB=6
WARMUP_JOB_TIMEOUT_SECONDS=600
WARMUP_MAX_BLOCK_SPAN=0
```

WARMUP_MAX_BLOCK_SPAN caps transfer-log block ranges (0 disables the guardrail).
Cacheless requests default to DEFAULT_CACHE_TIMESTAMP=0, which also powers warmup
renders. When cache= is omitted, the renderer prefers a collection cache_epoch
(if set) and falls back to DEFAULT_CACHE_TIMESTAMP. Set
DEFAULT_CACHE_TIMESTAMP=off to disable default caching.
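The fallback order can be expressed as a tiny resolution function; the name `effective_cache_epoch` and the `Option` modeling of `DEFAULT_CACHE_TIMESTAMP=off` are illustrative:

```rust
/// Resolve the cache epoch for a request, in the order described above:
/// explicit `cache=` param, then the collection's cache_epoch, then the
/// global default. `None` for `default_timestamp` models
/// DEFAULT_CACHE_TIMESTAMP=off (no default caching at all).
fn effective_cache_epoch(
    query_cache: Option<u64>,
    collection_epoch: Option<u64>,
    default_timestamp: Option<u64>,
) -> Option<u64> {
    query_cache.or(collection_epoch).or(default_timestamp)
}
```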
```
TOKEN_STATE_CHECK_TTL_SECONDS=86400
FRESH_RATE_LIMIT_SECONDS=300
FRESH_REQUEST_RETENTION_DAYS=7
```

- `TOKEN_STATE_CHECK_TTL_SECONDS` controls how long token state is considered fresh.
- `FRESH_RATE_LIMIT_SECONDS` enforces the per-NFT cooldown for `?fresh=1`.
- `FRESH_REQUEST_RETENTION_DAYS` prunes old `fresh=1` limiter rows (0 disables cleanup).
- `?fresh=1` forces an on-chain state refresh, returns `Cache-Control: no-store`, and still updates the canonical cache for subsequent non-fresh requests.
- Client keys can bypass the fresh limiter by setting `allow_fresh=true` in the admin UI.
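The per-NFT cooldown can be sketched as follows; `FreshLimiter` and its in-memory map are invented for the example (the service persists limiter rows in its database):

```rust
use std::collections::HashMap;

/// Illustrative per-token cooldown for `?fresh=1` requests.
struct FreshLimiter {
    cooldown_secs: u64,
    last_refresh: HashMap<String, u64>, // token key -> unix seconds of last refresh
}

impl FreshLimiter {
    /// Ok(()) permits the forced refresh; Err(n) carries the Retry-After
    /// seconds to return with a 429 response.
    fn check(&mut self, token_key: &str, now: u64) -> Result<(), u64> {
        if let Some(&last) = self.last_refresh.get(token_key) {
            let elapsed = now.saturating_sub(last);
            if elapsed < self.cooldown_secs {
                return Err(self.cooldown_secs - elapsed);
            }
        }
        self.last_refresh.insert(token_key.to_string(), now);
        Ok(())
    }
}
```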
- `PINNED_DIR` holds all unique IPFS assets discovered in Phase A+B. Plan for growth equal to the total distinct media for your collections (often tens of GB).
- `CACHE_DIR` stores rendered outputs and resized variants; allocate 2-4x the total expected pinned asset size if you plan to cache multiple widths/OG renders.
- Start with 50-200 GB for mid-sized collections and adjust after observing `/status` cache stats and warmup asset counts.
```
LANDING_DIR=/opt/renderer/landing
LANDING=index.html
LANDING_STRICT_HEADERS=true
LANDING_PUBLIC=false
STATUS_PUBLIC=false
OPENAPI_PUBLIC=true
```

When enabled, the service serves LANDING at `/` and static assets from LANDING_DIR. Render routes still take priority.
LANDING must be an .html file and this feature is disabled on Windows builds.
Set LANDING_PUBLIC=true to allow the landing page and its static assets to be
served without access gating (render routes remain protected).
LANDING_STRICT_HEADERS=true adds CSP, X-Frame-Options, and Referrer-Policy;
disable it if your landing needs embedding or external assets.
If the landing file is missing, the renderer serves a built-in minimal template with canonical, primary, and HEAD examples.
Landing serves only allowlisted extensions and does not expose directory indexes.
Do not place secrets or sensitive files under LANDING_DIR; any allowlisted file
extension can be served if requested.
Landing does not provide SPA-style fallbacks for deep links (e.g., /docs will not map to index.html).
For best UX, include copy-paste examples for the canonical vs primary route and
note that the primary route is slower (RPC lookup) while canonical is cache-first.
Set STATUS_PUBLIC=true to expose /status and /status.json for a lightweight
status widget (cache size, warmup queue, approvals, access mode). Avoid polling
these endpoints at high frequency.
If STATUS_PUBLIC=false or OPENAPI_PUBLIC=false, those endpoints require an
API key or an allowlisted IP even in ACCESS_MODE=open.
Set OPENAPI_PUBLIC=true to expose /openapi.yaml without access gating.
Static landing templates live under src/templates/<name>. Build one template
into dist/<name>:
```
bun install
bun run build:landing
bun run build:approval
```

Run `bun run build` to build every template folder under src/templates.
The approvals template reads build-time settings from .env:
- `APPROVALS_CONTRACTS` + `APPROVALS_CONTRACT_CHAIN`
- `RPC_ENDPOINTS`
- `CHAIN_ID_MAP`
Optional overrides:
- `LANDING_RENDERER_BASE_URL` (defaults to `window.location.origin`)
- `LANDING_SINGULAR_BASE_URL`
- `LANDING_APPROVALS_LIMIT`
- `LANDING_APPROVALS_PREVIEW_TOKENS`
If you point LANDING_RENDERER_BASE_URL at a different origin while using
LANDING_STRICT_HEADERS=true, disable strict headers or host the landing page
on the same origin so CSP allows image loads.
The approvals template performs client-side RPC calls. When serving it through
the renderer, either disable strict headers or expose an RPC proxy on the same
origin so connect-src allows the JSON-RPC requests.
When deploying behind a reverse proxy (nginx/ALB/Cloudflare):
- Set `TRUSTED_PROXY_CIDRS` to the proxy's IP ranges.
- Keep `RATE_LIMIT_PER_MINUTE` / `AUTH_FAILURE_RATE_LIMIT_PER_MINUTE` enabled at the proxy and the app.
- Terminate TLS at the proxy, and forward `X-Forwarded-For` / `Forwarded`.
- Avoid overly broad `TRUSTED_PROXY_CIDRS` like `0.0.0.0/0` unless you fully trust clients.
- Configure the proxy to overwrite forwarded headers; the app selects the last untrusted IP in the chain (bounded to 20 entries).
- If you have multiple proxies (e.g., Cloudflare → nginx), include all proxy CIDRs in `TRUSTED_PROXY_CIDRS` or client IP attribution will break.
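The "last untrusted IP" selection can be sketched like this; `client_ip` is an invented helper and the trusted-proxy check is simplified to an exact-match list (the real service matches `TRUSTED_PROXY_CIDRS` ranges):

```rust
/// Pick the client IP from an X-Forwarded-For chain: walk right-to-left and
/// return the first hop that is NOT a trusted proxy. The chain is bounded to
/// 20 entries, mirroring the limit described above.
fn client_ip(xff: &str, trusted_proxies: &[&str]) -> Option<String> {
    xff.split(',')
        .map(str::trim)
        .take(20)
        .collect::<Vec<_>>()
        .into_iter()
        .rev()
        .find(|hop| !trusted_proxies.contains(hop))
        .map(|hop| hop.to_string())
}
```

Walking from the right means a client cannot spoof its address by prepending fake entries, as long as every real proxy in front of the app is listed as trusted.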
Put the config in /etc/nginx/sites-available/renderer.rmrk.app, then enable it:

```
sudo ln -s /etc/nginx/sites-available/renderer.rmrk.app \
  /etc/nginx/sites-enabled/renderer.rmrk.app
sudo nginx -t
sudo systemctl reload nginx
```

Start with HTTP only so certbot can validate the domain:
```
upstream renderer {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name renderer.rmrk.app;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        proxy_pass http://renderer;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Then issue the cert (ensure port 80 is open in your firewall/security group):

```
sudo mkdir -p /var/www/certbot/.well-known/acme-challenge
sudo certbot certonly --webroot -w /var/www/certbot -d renderer.rmrk.app
```

After the cert exists, add HTTPS and redirect HTTP:
```
server {
    listen 80;
    server_name renderer.rmrk.app;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name renderer.rmrk.app;

    ssl_certificate /etc/letsencrypt/live/renderer.rmrk.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/renderer.rmrk.app/privkey.pem;
    client_max_body_size 2m;

    # Legacy: /nft/{chainId}/{collection}/{tokenId}?extension=png&img-width=600
    location ~ ^/nft/(?<chain_id>[^/]+)/(?<collection>0x[0-9A-Fa-f]+)/(?<token_id>[0-9]+)$ {
        set $chain $chain_id;
        if ($chain_id = "8453") { set $chain "base"; }
        set $format $arg_extension;
        if ($format = "") { set $format "png"; }
        rewrite ^ /render/$chain/$collection/$token_id/$format break;
        proxy_pass http://renderer;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 10s;
        proxy_read_timeout 120s;
        proxy_send_timeout 120s;
    }

    location / {
        proxy_pass http://renderer;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 10s;
        proxy_read_timeout 120s;
        proxy_send_timeout 120s;
    }
}
```

If `nginx -t` reports that no "ssl_certificate" is defined, remove `ssl` from the `listen 443 ssl` line until certbot has created the cert, then re-enable HTTPS.
If you are replacing legacy domains such as composable.rmrk.link and
nft-renderer.rmrk.app, you can keep old URLs working by proxying to the renderer
and rewriting /nft/... to the token-only route. /production/create/... is already
supported by the renderer and does not need a rewrite.
```
upstream renderer {
    server 127.0.0.1:8080;
}

server {
    listen 443 ssl;
    server_name composable.rmrk.link nft-renderer.rmrk.app;

    # Legacy: /nft/{chainId}/{collection}/{tokenId}?extension=png&img-width=600
    location ~ ^/nft/(?<chain_id>[^/]+)/(?<collection>0x[0-9A-Fa-f]+)/(?<token_id>[0-9]+)$ {
        set $chain $chain_id;
        if ($chain_id = "8453") { set $chain "base"; }
        set $format $arg_extension;
        if ($format = "") { set $format "png"; }
        rewrite ^ /render/$chain/$collection/$token_id/$format break;
        proxy_pass http://renderer;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location / {
        proxy_pass http://renderer;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Add more chain_id mappings as needed. If you prefer to keep numeric chain IDs in the URL, use numeric keys in RPC_ENDPOINTS and RENDER_UTILS_ADDRESSES instead (and drop the chain_id mapping above).
- Put the service behind a reverse proxy for TLS + connection throttling.
- Keep `/admin` private: IP allowlist, VPN, or additional proxy auth.
- Keep rate limits nonzero (even modest).
- Use `ACCESS_MODE=hybrid` or `key_required` with a strong `API_KEY_SECRET`.
- Leave `STATUS_PUBLIC=false` unless you intentionally expose it.
```
GET /render/{chain}/{collection}/{tokenId}/{assetId}/{format}
    ?cache={timestamp}
    &width={pixels|preset}
    &ogImage=true|false
    &overlay=watermark
    &bg={hex|transparent}
    &onerror=placeholder
```

format is now a path segment (e.g. /render/.../png), not a file extension.
Legacy dotted routes are still accepted for drop-in compatibility:

```
GET  /render/{chain}/{collection}/{tokenId}/{assetId}.{format}
HEAD /render/{chain}/{collection}/{tokenId}/{assetId}.{format}?cache={timestamp}
```

```
HEAD /render/{chain}/{collection}/{tokenId}/{assetId}/{format}?cache={timestamp}
```

cache= selects a specific cache epoch. Omit it to use the collection cache epoch (if set) or DEFAULT_CACHE_TIMESTAMP.
HEAD is supported on cached render routes and returns headers without a body. It
acts as a cache probe and never renders; cache misses return 200 with
X-Renderer-Cache-Hit: false (X-Cache: MISS, X-Renderer-Result: cache-miss)
and Cache-Control: no-store.
```
GET /render/{chain}/{collection}/{tokenId}/{format}
```

Response includes X-Renderer-Primary-AssetId and Cache-Control: no-store.
HEAD is only supported on canonical asset routes (/render/.../{assetId}/...).
Legacy dotted route:

```
GET /render/{chain}/{collection}/{tokenId}.{format}
```

```
GET /production/create/{chain}/{cacheTimestamp}/{collection}/{tokenId}/{assetId}/{format}
    ?img-width=600&ogImage=true
```

Legacy dotted route:

```
GET /production/create/{chain}/{cacheTimestamp}/{collection}/{tokenId}/{assetId}.{format}
    ?img-width=600&ogImage=true
```

```
GET /og/{chain}/{collection}/{tokenId}/{assetId}/{format}?cache={timestamp}
```

Legacy dotted route:

```
GET /og/{chain}/{collection}/{tokenId}/{assetId}.{format}?cache={timestamp}
```

- `X-Renderer-Complete: true|false`
- `X-Renderer-Result: rendered|placeholder|cache-miss|fallback`
- `X-Renderer-Cache-Hit: true|false` (cached renders and HEAD probes)
- `X-Cache: HIT|MISS`
- `X-Renderer-Missing-Layers: <count>` (when missing required layers)
- `X-Renderer-Nonconforming-Layers: <count>` (when raster sizes mismatch)
- `X-Renderer-Fallback: unapproved|render_fallback|token_override|queued|approval_rate_limited`
- `X-Renderer-Fallback-Source: global|collection|token` (disk-backed fallbacks)
- `X-Renderer-Fallback-Reason: approval_required|queue_full|rate_limited` (dynamic fallbacks)
- `X-Renderer-Fallback-Action: register_collection|retry|none`
- `X-Renderer-Error-Code: <code>` (JSON errors and fallbacks)
- `X-Request-Id: <id>` for correlation
- `Cache-Control: public, max-age=...` when cacheable
- `ETag` for conditional GET on cached renders
When onerror=placeholder is set, failed renders return a tiny placeholder image
with X-Renderer-Error: true instead of JSON.
The admin panel HTML is served at /admin (no secrets). The /admin/api/** endpoints require:
Authorization: Bearer <ADMIN_PASSWORD>
Runtime settings (e.g. toggling approval requirements) are exposed at /admin/api/settings.
```
curl -H "Authorization: Bearer $ADMIN_PASSWORD" \
  http://localhost:8080/admin/api/collections

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection_address":"0x...","approved":true}' \
  http://localhost:8080/admin/api/collections

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"token_id":"1","asset_id":"100"}' \
  http://localhost:8080/admin/api/collections/base/0x.../refresh-canvas

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"epoch":123}' \
  http://localhost:8080/admin/api/collections/base/0x.../cache-epoch
```

Omit `epoch` to auto-bump by 1.
- Approve the collection (if approvals are required).
- Phase A: catalog warmup (pins shared assets).
- Phase B: token scan warmup (pins token-specific assets).
- Phase C: render warmup (optional pre-render of thumbnails/OG).
```
curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection":"0x...","token_id":"1","asset_id":"100"}' \
  http://localhost:8080/admin/api/warmup/catalog

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection":"0x...","start_token":1,"end_token":100}' \
  http://localhost:8080/admin/api/warmup/tokens

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection":"0x...","token_ids":["1","2","3"]}' \
  http://localhost:8080/admin/api/warmup/tokens/manual
```

Render warmup uses the normal render pipeline (pinned assets + token state cache).

```
curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{
        "chain":"base",
        "collection":"0x...",
        "token_ids":["1","2","3"],
        "widths":["medium","large"],
        "include_og":true,
        "cache_timestamp":"1700000000000"
      }' \
  http://localhost:8080/admin/api/warmup

curl -H "Authorization: Bearer $ADMIN_PASSWORD" \
  "http://localhost:8080/admin/api/warmup/status?chain=base&collection=0x..."

curl -H "Authorization: Bearer $ADMIN_PASSWORD" \
  "http://localhost:8080/admin/api/warmup/jobs?limit=100"

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  http://localhost:8080/admin/api/warmup/jobs/123/cancel
```

```
# Purge renders for a collection
curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection":"0x..."}' \
  http://localhost:8080/admin/api/cache/purge

# Purge everything (renders + assets + overlays)
curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"include_assets":true}' \
  http://localhost:8080/admin/api/cache/purge
```

Per-collection overlays can be configured in the admin table:
- `og_overlay_uri` for OG mode
- `watermark_overlay_uri` for `overlay=watermark`
Note: only overlay=watermark is supported in this MVP.
Supported schemes:
- `ipfs://...`
- `https://...`
- `local://filename.svg` (resolved relative to `CACHE_DIR/overlays/`)
The admin API supports disk-backed fallback/override images for:
- Global unapproved collections
- Per-collection unapproved and render-failure fallbacks
- Per-token overrides (chain + collection + token_id)
Images are processed on upload (size limits + re-encoding), stored under FALLBACKS_DIR,
and served directly from disk with consistent ETag + cache headers. Authorized clients
can still bypass fallbacks with ?debug=1/?raw=1 to see JSON errors.
If no unapproved fallback is uploaded, the renderer returns a generated CTA image; the
two CTA lines are configurable via the admin settings (line 1 + line 2, often a URL).
These inputs are intentionally unvalidated and treated as trusted admin content to maximize
conversion control; this is an accepted risk, so protect admin access accordingly.
Fallbacks are not cache. Keep FALLBACKS_DIR outside CACHE_DIR (default:
/var/lib/renderer/fallbacks). Cache purge operations only remove cache subdirectories.
Token override lookups are cached in memory; tune with
TOKEN_OVERRIDE_CACHE_TTL_SECONDS and TOKEN_OVERRIDE_CACHE_CAPACITY.
See spec-docs/RENDERER_SPEC_v1.2_UPDATED.md for full behavior and endpoints.
Admin API examples:
```
# Upload global unapproved fallback
curl -X POST -H "Authorization: Bearer <admin>" -F file=@fallback.png \
  http://127.0.0.1:8080/admin/api/fallbacks/unapproved

# Upload per-collection unapproved fallback
curl -X POST -H "Authorization: Bearer <admin>" -F file=@fallback.png \
  http://127.0.0.1:8080/admin/api/collections/<chain>/<collection>/fallbacks/unapproved

# Upload per-collection render fallback
curl -X POST -H "Authorization: Bearer <admin>" -F file=@fallback.png \
  http://127.0.0.1:8080/admin/api/collections/<chain>/<collection>/fallbacks/render

# Upload token override
curl -X POST -H "Authorization: Bearer <admin>" -F file=@fallback.png \
  http://127.0.0.1:8080/admin/api/collections/<chain>/<collection>/overrides/<token_id>
```
```
cargo build --release
cargo test
```

```
set -a
source .env
set +a

# Terminal 1
cargo run
```

```
# Terminal 2 (warmup A + B + optional C)
curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection":"0x...","token_id":"1","asset_id":"100"}' \
  http://localhost:8085/admin/api/warmup/catalog

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection":"0x...","start_token":1,"end_token":50}' \
  http://localhost:8085/admin/api/warmup/tokens

curl -X POST -H "Authorization: Bearer $ADMIN_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"chain":"base","collection":"0x...","token_ids":["1","2","3"],"widths":["medium"],"cache_timestamp":"1700000000000"}' \
  http://localhost:8085/admin/api/warmup
```

```
# Terminal 3 (simulate a marketplace grid)
bun run scripts/marketplace-sim.ts \
  --base-url http://127.0.0.1:8085 \
  --chain base \
  --collection 0x... \
  --start 1 \
  --count 100 \
  --concurrency 20 \
  --width medium
```

```
# Terminal 4 (capture rendered outputs)
bun run scripts/render-output.ts \
  --base-url http://127.0.0.1:8085 \
  --chain base \
  --collection 0x... \
  --start 1 \
  --count 100 \
  --output-dir ./pinned-testXX/outputs \
  --width 512 \
  --format png
```

```
cargo fmt --all -- --check
cargo clippy --all-targets --all-features -- -D warnings
cargo test
cargo audit
```

- Optional: `cargo deny check`
```
[Unit]
Description=RMRK Renderer
After=network.target

[Service]
Type=simple
User=renderer
Group=renderer
WorkingDirectory=/opt/renderer
EnvironmentFile=/opt/renderer/.env
ExecStart=/opt/renderer/renderer
Restart=on-failure
RestartSec=2
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
```

Replace `renderer` with your service user (e.g. bitfalls) and adjust paths to match your install.
After creating or updating the unit and env file:

```
sudo mkdir -p /var/lib/renderer /var/cache/renderer
sudo chown -R renderer:renderer /var/lib/renderer /var/cache/renderer
sudo systemctl daemon-reload
sudo systemctl enable --now renderer
sudo systemctl status renderer
```

If you update /opt/renderer/.env or swap the binary, restart the service (remember to `chmod +x` a new binary):

```
sudo systemctl restart renderer
sudo journalctl -u renderer -f
```

Then validate and reload nginx (renderer first, nginx second):

```
sudo nginx -t
sudo systemctl reload nginx
```

Quick sanity checks:

```
curl -I http://127.0.0.1:8080/
# If you expose it: curl -I http://127.0.0.1:8080/status
```

```
set -a
source .env
set +a
cargo run
# or ./target/release/proj-renderer if compiled
```

Put a CDN or reverse proxy (e.g., Cloudflare or nginx) in front if desired.
Cache control is safe because cache busting is URL-driven via the cache= parameter.
- Failure responses (4xx/5xx) are logged as JSON lines to `FAILURE_LOG_PATH`.
- Set `FAILURE_LOG_PATH=off` to disable logging.
- `FAILURE_LOG_MAX_BYTES` caps file size (oldest entries are truncated).
- `FAILURE_LOG_CHANNEL_CAPACITY` bounds log bursts (entries are dropped when full).
- IPs are hashed by default via `IDENTITY_IP_LABEL_MODE`.
- By default, only `5xx` plus `401`/`403`/`429` are logged to reduce 404 spam.
- `/status` and `/admin/api/warmup/status` include queued/running/done/failed counts.
- If warmups stop progressing, check pause state and resume via `/admin/api/warmup/resume`.
- Use `/admin/api/warmup/jobs` and `/admin/api/warmup/jobs/{id}/cancel` to inspect or stop jobs.
- Use the Admin UI → “Hash Replacements” to upload a static image for a CID that is missing or unpinned.
- The uploaded image is returned as-is (no resizing) whenever that CID is requested.
- Files are stored under `PINNED_DIR/hash-replacements/`.
- Canvas size is derived from the first fixed part’s art. If SVG sizing is invalid, defaults are used and the collection should be reviewed.
- Raster layers that do not match the canonical canvas size are treated as nonconforming.
- Non-composable primary assets fall back to a single-layer render using asset metadata.
- Original-size fallback renders are not cached; resized/OG variants are.
- Fallback widths snap to preset buckets (64/128/256/512/1024/2048); numeric widths round to the nearest bucket.
- If a raster asset exceeds size limits, the renderer attempts a resize; if it still fails and `thumbnailUri` exists, the thumbnail is used.
- Usage identity keys for non-API requests include a hashed IP by default (see `IDENTITY_IP_LABEL_MODE`); ensure `TRUSTED_PROXY_CIDRS` is set when proxying.
- Failure responses (4xx/5xx) are logged as JSON lines to `FAILURE_LOG_PATH` (default `/var/lib/renderer/logs/renderer-failures.log`) and capped by `FAILURE_LOG_MAX_BYTES` (set `FAILURE_LOG_PATH=off` to disable). By default, only `5xx` plus `401`/`403`/`429` are logged. Use `FAILURE_LOG_CHANNEL_CAPACITY` to bound bursts.
- `?fresh=1` forces a state refresh and returns `Cache-Control: no-store`. If rate-limited, expect a 429 with `Retry-After`.
- Oversized raster assets are fetched with a higher byte cap and resized to `MAX_RASTER_RESIZE_DIM` during pinning/asset fetch.
- Token warmup skips invalid/empty asset URIs (logged) so jobs can complete.
- Relative asset URIs are resolved against the metadata URI; `ar://` is normalized to `https://arweave.net/`.
- HTTP gateway URLs with `/ipfs/<cid>` are normalized to `ipfs://` so gateway rotation can recover from flaky gateways.
- Warmup renders only cache when a `cache_timestamp` is provided.
- See PRODUCTION.md for a deployment checklist and openapi.yaml for a minimal API spec.
- `*_PUBLIC` flags bypass access gating only; they do not disable routes entirely.
- Metrics: see metrics/README.md for Prometheus/Grafana setup and panel queries.
- Fallback overrides are served from `FALLBACKS_DIR` and can replace unapproved/failed renders.
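The width bucket snapping mentioned above can be sketched as follows; tie-breaking toward the smaller bucket is an assumption of this example:

```rust
/// Preset width buckets, as listed in the notes above.
const WIDTH_BUCKETS: [u32; 6] = [64, 128, 256, 512, 1024, 2048];

/// Snap a requested numeric width to the nearest preset bucket.
/// On an exact tie between two buckets, the smaller one wins (assumption).
fn snap_width(requested: u32) -> u32 {
    *WIDTH_BUCKETS
        .iter()
        .min_by_key(|&&b| (b.abs_diff(requested), b))
        .unwrap()
}
```

Snapping keeps the resized-variant cache bounded: arbitrary `width=` values collapse into six cacheable sizes.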
- Ensure writable paths: `CACHE_DIR`, `FALLBACKS_DIR`, the `DB_PATH` directory, and `FAILURE_LOG_PATH` (or set `FAILURE_LOG_PATH=off`).
- If `ACCESS_MODE=open` or `hybrid`, set a nonzero `RATE_LIMIT_PER_MINUTE` and burst.
- Set a Prometheus retention size cap (`--storage.tsdb.retention.size`) in addition to time.
- Local dev: `ACCESS_MODE=open`, `REQUIRE_APPROVAL=false`, permissive limits.
- Staging: `ACCESS_MODE=key_required`, `OPENAPI_PUBLIC=false`, moderate limits.
- Prod: approvals on, key or allowlist mode, strict limits.
- `cargo fmt --check`
- `cargo clippy`
- `cargo test`
- `cargo audit` (or `cargo deny`) on a schedule
- `TRUSTED_PROXY_CIDRS` too broad lets clients spoof IPs (rate limiting/denylist bypass).
- `ALLOW_PRIVATE_NETWORKS=true` enables internal SSRF paths; use only in trusted networks.
- `ALLOW_HTTP=true` weakens transport safety; keep it off in production.