A Redis-compatible in-memory key-value store written in Rust.
kvns speaks RESP and works with redis-cli and Redis client libraries.
This project is an experiment and in part uses code generated with models from Anthropic.
- RESP2 server with RESP3 handshake support via `HELLO 3`
- Redis-like command surface across strings, lists, hashes, sets, and sorted sets
- Key namespacing via `namespace/key` syntax
- TTL/expiry management (`EXPIRE*`, `PEXPIRE*`, `PERSIST`, `EXPIRETIME`, `PEXPIRETIME`)
- Configurable memory limit with OOM rejection or namespace-scoped eviction (`lru`, `mru`)
- Memory-limit guardrails when configured (`KVNS_MEMORY_LIMIT` is capped at 70% of host RAM)
- Optional on-disk persistence with periodic flush and shutdown flush
- Prometheus metrics endpoint with per-namespace labels
- Structured logs via `tracing`
Keys may include a namespace prefix separated by `/`:

```
namespace/localkey
```

- `SET db1/x 42` stores key `x` in namespace `db1`
- `GET db1/x` reads key `x` from namespace `db1`
- Keys with no `/` are stored in the `default` namespace

Only the first `/` is treated as the separator, so local keys may also contain `/` (for example `SET ns/a/b value` uses namespace `ns` and key `a/b`).

Namespaces are isolated: `db1/x` and `db2/x` are different keys with independent memory/accounting metrics.
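The first-`/` split rule can be sketched in a few lines of Rust (a hypothetical helper for illustration, not the actual kvns code):

```rust
// Split a raw key into (namespace, local key) on the FIRST '/' only,
// falling back to the "default" namespace when no '/' is present.
fn split_namespace(raw: &str) -> (&str, &str) {
    match raw.split_once('/') {
        Some((ns, local)) => (ns, local),
        None => ("default", raw), // no '/' => default namespace
    }
}

fn main() {
    assert_eq!(split_namespace("db1/x"), ("db1", "x"));
    assert_eq!(split_namespace("ns/a/b"), ("ns", "a/b")); // local key keeps later '/'
    assert_eq!(split_namespace("counter"), ("default", "counter"));
}
```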
Command names are case-insensitive.
| Family | Commands |
|---|---|
| Connection | PING, QUIT, HELLO, RESET, SELECT |
| String | SET, GET, MGET, MSET, MSETNX, SETNX, GETSET, GETDEL, GETEX, APPEND, STRLEN, INCR, INCRBY, DECR, DECRBY, INCRBYFLOAT, SETRANGE, GETRANGE, SUBSTR |
| List | LPUSH, RPUSH, LPUSHX, RPUSHX, LPOP, RPOP, LLEN, LRANGE, LINDEX, LSET, LREM, LTRIM, LINSERT, LPOS, LMOVE |
| Hash | HSET, HMSET, HGET, HDEL, HEXISTS, HGETALL, HKEYS, HVALS, HLEN, HMGET, HINCRBY, HINCRBYFLOAT, HRANDFIELD |
| Set | SADD, SREM, SMEMBERS, SCARD, SISMEMBER, SMISMEMBER, SUNION, SINTER, SDIFF, SUNIONSTORE, SINTERSTORE, SDIFFSTORE, SMOVE, SPOP, SRANDMEMBER |
| Sorted set | ZADD, ZRANGE, ZRANGEBYSCORE, ZREVRANGEBYSCORE, ZREVRANGE, ZRANK, ZREVRANK, ZSCORE, ZMSCORE, ZREM, ZCARD, ZCOUNT, ZINCRBY, ZRANGEBYLEX, ZLEXCOUNT, ZREMRANGEBYRANK, ZREMRANGEBYSCORE, ZREMRANGEBYLEX, ZPOPMIN, ZPOPMAX, ZRANDMEMBER |
| Generic/keyspace | DEL, UNLINK, EXISTS, TYPE, TTL, PTTL, EXPIRE, EXPIREAT, PEXPIRE, PEXPIREAT, PERSIST, EXPIRETIME, PEXPIRETIME, RENAME, RENAMENX, SCAN, KEYS, TOUCH, COPY, OBJECT |
| Pub/Sub | SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH |
| Server/introspection | DBSIZE, FLUSHDB, FLUSHALL, INFO, CONFIG, COMMAND, CLIENT, LATENCY, SLOWLOG, DEBUG, WAIT, XADD |
Compatibility notes:

- `HMSET` is accepted as an alias for `HSET`.
- `SUBSTR` is accepted as an alias for `GETRANGE`.
- `XADD` currently returns `ERR stream type not supported`.
- Some server/introspection subcommands are compatibility shims and return static or empty responses.
- `SELECT` only supports database index `0`.
TTL and expiry return semantics match Redis-style integer responses:
- `TTL`/`PTTL` return `-2` for missing keys
- `TTL`/`PTTL` return `-1` for keys without expiry
- `EXPIRE`, `EXPIREAT`, `PEXPIRE`, and `PEXPIREAT` support `NX`, `XX`, `GT`, `LT`
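The `NX`/`XX`/`GT`/`LT` gating can be sketched as a small decision function (a hypothetical helper following standard Redis semantics, not the kvns implementation):

```rust
// Sketch of Redis-style EXPIRE flag semantics (NX/XX/GT/LT).
#[derive(Clone, Copy)]
enum ExpireFlag {
    None,
    Nx, // set expiry only when the key has none
    Xx, // set expiry only when the key already has one
    Gt, // set only when the new expiry is greater than the current one
    Lt, // set only when the new expiry is less than the current one
}

/// `current` is the existing expiry timestamp (None = no expiry / persists forever).
fn expire_allowed(flag: ExpireFlag, current: Option<i64>, new: i64) -> bool {
    match flag {
        ExpireFlag::None => true,
        ExpireFlag::Nx => current.is_none(),
        ExpireFlag::Xx => current.is_some(),
        // A missing expiry behaves as infinite: GT can never beat it, LT always does.
        ExpireFlag::Gt => current.map_or(false, |c| new > c),
        ExpireFlag::Lt => current.map_or(true, |c| new < c),
    }
}

fn main() {
    assert!(expire_allowed(ExpireFlag::Nx, None, 100));
    assert!(!expire_allowed(ExpireFlag::Nx, Some(50), 100));
    assert!(!expire_allowed(ExpireFlag::Gt, None, 100)); // no expiry counts as infinite
    assert!(expire_allowed(ExpireFlag::Lt, None, 100));
}
```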
| Pattern | Matches |
|---|---|
| `*` | Any sequence of characters (including none) |
| `?` | Exactly one character |
| `[ae]` | One of the listed characters (`a` or `e`) |
| `[^e]` / `[!e]` | Any character except those listed |
| `[a-z]` | Any character in the range |
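These semantics can be illustrated with a small recursive matcher (a hedged sketch, not the matcher kvns actually uses; it omits some glob corner cases such as escaping):

```rust
// Minimal glob matcher supporting *, ?, [..], [^..]/[!..], and [a-z] ranges.
fn glob_match(pat: &[u8], s: &[u8]) -> bool {
    match pat.split_first() {
        None => s.is_empty(),
        // '*' matches any suffix of s, including the empty one.
        Some((&b'*', rest)) => (0..=s.len()).any(|i| glob_match(rest, &s[i..])),
        Some((&b'?', rest)) => !s.is_empty() && glob_match(rest, &s[1..]),
        Some((&b'[', rest)) => {
            let Some((&c, s_rest)) = s.split_first() else { return false };
            // Optional negation with '^' or '!'.
            let (neg, rest) = match rest.split_first() {
                Some((&b'^', r)) | Some((&b'!', r)) => (true, r),
                _ => (false, rest),
            };
            let close = match rest.iter().position(|&b| b == b']') {
                Some(i) => i,
                None => return false, // unterminated class: no match
            };
            let class = &rest[..close];
            let (mut hit, mut i) = (false, 0);
            while i < class.len() {
                if i + 2 < class.len() && class[i + 1] == b'-' {
                    if class[i] <= c && c <= class[i + 2] {
                        hit = true; // range like a-z
                    }
                    i += 3;
                } else {
                    if class[i] == c {
                        hit = true; // literal member
                    }
                    i += 1;
                }
            }
            if hit != neg { glob_match(&rest[close + 1..], s_rest) } else { false }
        }
        // Any other byte must match literally.
        Some((&p, rest)) => s.first() == Some(&p) && glob_match(rest, &s[1..]),
    }
}

fn main() {
    assert!(glob_match(b"h[ae]llo", b"hello"));
    assert!(glob_match(b"h[ae]llo", b"hallo"));
    assert!(!glob_match(b"h[ae]llo", b"hillo"));
    assert!(glob_match(b"ns/*", b"ns/a/b"));
    assert!(!glob_match(b"[^e]x", b"ex"));
}
```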
Examples:

```
KEYS *          # all keys in all namespaces
KEYS ns/*       # all keys in namespace "ns"
SCAN 0 MATCH ns/* COUNT 50
KEYS h[ae]llo   # hello or hallo
```

Development run:

```
cargo run
```

Release build and run:

```
cargo build --release
./target/release/kvns
```

Or use the Makefile:

```
make run
make build
make release
```

Build a local image:

```
make podman-build IMAGE=kvns:local
```

Run directly:

```
make podman-run IMAGE=kvns:local
```

Use podman compose with docker-compose.yaml:

```
make podman-compose-up
make podman-compose-logs
make podman-compose-down
```

Push a multi-arch image to GHCR:

```
make podman-login-ghcr GHCR_USER="<github-user>" GHCR_TOKEN="<github-token>"
make podman-push-ghcr GHCR_IMAGE="ghcr.io/<owner>/<repo>" TAG="v0.3.0"
```

Notes:

- `podman-push-ghcr` builds and pushes `linux/amd64,linux/arm64` by default
- Set `PLATFORMS` to override target platforms
- Set `PUSH_LATEST=false` to skip publishing `latest`
All settings are read from environment variables at startup.
| Variable | Default | Description |
|---|---|---|
| `KVNS_LOG` | `info` | Log level filter (e.g. `debug`, `warn`, `error`, or `kvns=debug`) |
| `KVNS_HOST` | `0.0.0.0` | Interface to listen on |
| `KVNS_PORT` | `6480` | RESP listener port |
| `KVNS_MEMORY_LIMIT` | `1073741824` | Max memory in bytes (1 GiB). When set, kvns caps it at 70% of detected host RAM |
| `KVNS_METRICS_HOST` | `0.0.0.0` | Metrics listener host |
| `KVNS_METRICS_PORT` | `9090` | Metrics listener port |
| `KVNS_PERSIST_PATH` | (unset) | Persistence file path; persistence is disabled if unset |
| `KVNS_PERSIST_INTERVAL` | `300` | Seconds between automatic flushes |
| `KVNS_SHARED_VALUES` | `true` | When `true`, entry values are stored via `Arc` so persistence snapshots clone pointers instead of payload bytes. See Value sharing. |
| `KVNS_EVICTION_POLICY` | `none` | Global eviction policy: `lru`, `mru`, or `none` |
| `KVNS_EVICTION_THRESHOLD` | `1.0` | Fraction of memory limit (0.0-1.0) at which eviction starts |
| `KVNS_NS_EVICTION` | (unset) | Per-namespace policy overrides, e.g. `ns1:lru,ns2:mru` |
| `KVNS_SHARDED_MODE` | `false` | Enable experimental sharded lock backend (currently supports PING, QUIT, SET, GET, MGET, MSET, MSETNX, SETNX, INCR, INCRBY, DECR, DECRBY) |
| `KVNS_SHARD_COUNT` | 4 × CPU cores | Number of lock shards when `KVNS_SHARDED_MODE=true` |
| `KVNS_MAX_CLIENTS` | `10000` | Maximum concurrent client connections accepted |
| `KVNS_MAX_RESP_ARGS` | `1024` | Maximum number of arguments/elements accepted in one RESP command |
| `KVNS_MAX_RESP_BULK_LEN` | `16777216` | Maximum bytes allowed for a single RESP bulk string |
| `KVNS_MAX_RESP_INLINE_LEN` | `65536` | Maximum bytes allowed for a RESP inline/header line |
Examples:
```
# Custom port + memory limit
KVNS_PORT=6379 KVNS_MEMORY_LIMIT=536870912 cargo run

# Enable persistence
KVNS_PERSIST_PATH=/var/lib/kvns/db.rkyv KVNS_PERSIST_INTERVAL=60 cargo run

# Enable LRU eviction at 80% memory usage
KVNS_EVICTION_POLICY=lru KVNS_EVICTION_THRESHOLD=0.8 cargo run

# Override one namespace to MRU while global policy is LRU
KVNS_EVICTION_POLICY=lru KVNS_NS_EVICTION=cache:mru cargo run

# Run the experimental sharded backend
KVNS_SHARDED_MODE=true KVNS_SHARD_COUNT=64 cargo run
```

Sharded mode notes:

- `KVNS_SHARDED_MODE` is experimental and currently optimized for throughput-oriented string workloads.
- Under concurrent writers, multi-key command atomicity may differ from classic mode.
Memory limit behavior:
- If `KVNS_MEMORY_LIMIT` is unset, kvns uses the default `1073741824` bytes (1 GiB)
- If `KVNS_MEMORY_LIMIT` is set above 70% of detected host RAM, kvns clamps it to that 70% cap
- If `KVNS_MEMORY_LIMIT=0`, kvns uses the same 70% cap directly
- If host memory cannot be detected, kvns keeps the configured value; `0` falls back to the 1 GiB default
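The four rules above can be condensed into one function (a hypothetical sketch, not the kvns source):

```rust
// `configured` is KVNS_MEMORY_LIMIT (None = unset); `host_ram` is None when
// host memory cannot be detected.
const DEFAULT_LIMIT: u64 = 1_073_741_824; // 1 GiB

fn effective_limit(configured: Option<u64>, host_ram: Option<u64>) -> u64 {
    let cap = host_ram.map(|ram| ram / 10 * 7); // 70% of detected RAM
    match (configured, cap) {
        (None, _) => DEFAULT_LIMIT,         // unset: 1 GiB default
        (Some(0), Some(cap)) => cap,        // 0: use the 70% cap directly
        (Some(0), None) => DEFAULT_LIMIT,   // 0 + no detection: 1 GiB default
        (Some(v), Some(cap)) => v.min(cap), // clamp to the 70% cap
        (Some(v), None) => v,               // no detection: keep configured value
    }
}

fn main() {
    assert_eq!(effective_limit(None, Some(16_000_000_000)), DEFAULT_LIMIT);
    assert_eq!(effective_limit(Some(0), Some(10_000_000_000)), 7_000_000_000);
    assert_eq!(effective_limit(Some(9_000_000_000), Some(10_000_000_000)), 7_000_000_000);
    assert_eq!(effective_limit(Some(123), None), 123);
}
```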
When a write would exceed the effective KVNS_MEMORY_LIMIT, kvns either rejects it with OOM or evicts keys depending on policy.
| Policy | Description |
|---|---|
| `none` | No eviction (default). Writes beyond limit return an error. |
| `lru` | Evict lowest-hit keys first. |
| `mru` | Evict highest-hit keys first. |
| `ear` | Expire-after-read: keys are deleted on the next background sweep after being read. Aliases: `expire_after_read`, `expireafterread`. |
`KVNS_EVICTION_THRESHOLD` controls when eviction begins. With `1.0` (default), eviction starts only at full configured capacity. Lower values (for example `0.8`) start eviction earlier.

`KVNS_NS_EVICTION` supports comma-separated `namespace:policy` overrides. Eviction is namespace-scoped: a write in one namespace never evicts keys from another namespace.
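The threshold check and the override format can be sketched as follows (hypothetical helpers for illustration, not the kvns source):

```rust
use std::collections::HashMap;

// "ns1:lru,ns2:mru" -> {"ns1": "lru", "ns2": "mru"}
fn parse_ns_eviction(spec: &str) -> HashMap<String, String> {
    spec.split(',')
        .filter_map(|pair| pair.split_once(':'))
        .map(|(ns, policy)| (ns.to_string(), policy.to_string()))
        .collect()
}

// Eviction kicks in once used >= limit * threshold (1.0 = only at full capacity).
fn should_evict(used_bytes: u64, limit_bytes: u64, threshold: f64) -> bool {
    used_bytes as f64 >= limit_bytes as f64 * threshold
}

fn main() {
    let overrides = parse_ns_eviction("cache:mru,sessions:lru");
    assert_eq!(overrides.get("cache").map(String::as_str), Some("mru"));
    assert!(should_evict(800, 1000, 0.8)); // at 80% with threshold 0.8
    assert!(!should_evict(799, 1000, 0.8));
}
```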
When `KVNS_PERSIST_PATH` is set, kvns periodically snapshots the full store to disk using rkyv. Writes are atomic: data is written to a temporary file (`*.tmp`) and then renamed into place.
- Persistence is opt-in (leave `KVNS_PERSIST_PATH` unset for in-memory only)
- Flush interval is controlled by `KVNS_PERSIST_INTERVAL`
- On startup, persisted data is loaded if present
- Expired entries are dropped during load
- On clean shutdown (`SIGINT`/`SIGTERM`), kvns flushes immediately
- Parent directories for `KVNS_PERSIST_PATH` are created automatically
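The write-to-temp-then-rename pattern looks roughly like this (a hypothetical sketch: kvns serializes with rkyv, while this version just writes raw bytes, and its `.tmp` naming may differ):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

fn atomic_write(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    // Parent directories are created automatically, mirroring kvns behavior.
    if let Some(dir) = path.parent() {
        if !dir.as_os_str().is_empty() {
            fs::create_dir_all(dir)?;
        }
    }
    let tmp = path.with_extension("tmp"); // note: "db.rkyv" -> "db.tmp" here
    let mut file = fs::File::create(&tmp)?;
    file.write_all(bytes)?;
    file.sync_all()?; // flush to disk before the rename makes the snapshot visible
    fs::rename(&tmp, path) // rename is atomic on POSIX filesystems
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("kvns-example").join("db.rkyv");
    atomic_write(&path, b"snapshot-bytes")?;
    assert_eq!(fs::read(&path)?, b"snapshot-bytes");
    Ok(())
}
```

A reader never observes a half-written snapshot: it sees either the old file or the fully renamed new one.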
The periodic persistence flush needs a consistent snapshot of the store. By default (KVNS_SHARED_VALUES=true), entry values are held via Arc so the snapshot step is a pointer-bump rather than a deep byte copy. In exchange each write allocates a small Arc header.
| `KVNS_SHARED_VALUES` | Write throughput | Max latency during snapshot |
|---|---|---|
| `true` (default) | ~3–5% lower | Roughly flat regardless of store size |
| `false` | Baseline | Spikes proportional to store size while the snapshot clones |
Set KVNS_SHARED_VALUES=false when persistence is disabled or when raw write throughput matters more than snapshot-time tail latency. The flag is read once at startup; changing it requires a restart.
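The trade-off is visible in a tiny sketch: with `Arc`-backed values, snapshotting the map clones pointers, not payloads (hypothetical types, not kvns internals):

```rust
use std::collections::HashMap;
use std::sync::Arc;

type Store = HashMap<String, Arc<Vec<u8>>>;

// Snapshot: clones each Arc, i.e. O(keys) refcount bumps; payload bytes untouched.
fn snapshot(store: &Store) -> Store {
    store.clone()
}

fn main() {
    let mut store: Store = HashMap::new();
    store.insert("db1/x".into(), Arc::new(vec![0u8; 1_000_000]));

    let snap = snapshot(&store);

    // Both maps share the same 1 MB allocation.
    assert!(Arc::ptr_eq(&store["db1/x"], &snap["db1/x"]));
    assert_eq!(Arc::strong_count(&store["db1/x"]), 2);
}
```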
kvns exposes Prometheus metrics at http://<KVNS_METRICS_HOST>:<KVNS_METRICS_PORT>/metrics.
| Metric | Type | Labels | Description |
|---|---|---|---|
| `kvns_keys_total` | Gauge | `namespace` | Current live key count per namespace |
| `kvns_memory_used_bytes` | Gauge | `namespace` | Current memory used per namespace |
| `kvns_memory_used_bytes_total` | Gauge | - | Total memory used across all namespaces |
| `kvns_memory_limit_bytes` | Gauge | - | Configured memory limit |
| `kvns_command_duration_seconds` | Histogram | `command`, `namespace` | Command latency histogram (currently instrumented for `SET`) |
| `kvns_evictions_total` | Counter | `namespace` | Total keys evicted per namespace |
| `kvns_ear_evictions_total` | Counter | `namespace` | Total keys deleted by the ExpireAfterRead background sweep |
Per-namespace gauges are created on first write and are set to 0 when the last key in a namespace is removed.
```
# Start kvns with persistence enabled
KVNS_PERSIST_PATH=kvns.db cargo run &

# Write namespaced and default keys
redis-cli -p 6480 SET db1/x 42
redis-cli -p 6480 SET db2/x 99
redis-cli -p 6480 SET counter 0
redis-cli -p 6480 INCR counter

# Query data
redis-cli -p 6480 GET db1/x
redis-cli -p 6480 KEYS "db*/*"
redis-cli -p 6480 SCAN 0 MATCH "db*/*" COUNT 100

# Inspect metrics
curl -s http://localhost:9090/metrics | grep -E '^kvns_(memory|keys|evictions)'
```

Run the test suite and checks:

```
cargo test
make lint
make fmt-check
```

Run benchmark profiles against kvns:

```
make benchmark
```

Run the same benchmark suite against the kvns experimental sharded backend:

```
make benchmark-sharded
```

Run classic and sharded back-to-back and print a direct speedup table:

```
make benchmark-compare
```

Notes:

- Output artifacts are written under `/tmp/kvns-bench-*` (or `BENCH_DIR` if set)
| Environment | Value |
|---|---|
| Machine | Apple M4 (10-core) |
| RAM | 16 GiB |
| OS | macOS 26.3 |
| Profile length | 60 s per memtier profile |
| Bench command | `./scripts/benchmark_kvns.sh` |
Numbers are single-run samples on a development laptop and move ±10% run-to-run with thermal/load state; treat them as order-of-magnitude.
| Metric | Classic | Sharded | Sharded / Classic |
|---|---|---|---|
| Direct SET ops/sec | 137,392 | 164,079 | 1.19x |
| Direct GET ops/sec | 163,713 | 164,336 | 1.00x |
| Pipeline SET ops/sec | 432,061 | 2,307,804 | 5.34x |
| Pipeline GET ops/sec | 1,434,068 | 3,191,423 | 2.23x |
| Direct SET avg ms | 1.164 | 0.975 | 1.19x |
| Direct GET avg ms | 0.977 | 0.973 | 1.00x |
| Pipeline SET avg ms | 16.660 | 3.106 | 5.36x |
| Pipeline GET avg ms | 5.014 | 2.242 | 2.24x |
KVNS_SHARED_VALUES (see Value sharing) controls whether entry values are held via Arc so persistence snapshots clone pointers rather than bytes. Throughput impact is small; the payoff is in tail latency during a snapshot.
Classic mode, memtier direct profile (`-c 20 -t 8 -d 256 --test-time=60 --ratio 1:0` / `0:1`):
| Config | Direct SET ops/sec | Direct GET ops/sec | Pipe SET ops/sec | Pipe GET ops/sec |
|---|---|---|---|---|
| pre-changes baseline | 137,011 | 165,179 | 446,548 | 1,541,850 |
| `KVNS_SHARED_VALUES=false` | 136,890 | 165,472 | 458,047 | 1,484,701 |
| `KVNS_SHARED_VALUES=true` (default) | 137,392 | 163,713 | 432,061 | 1,434,068 |
Persist-stall scenario (200k × 256-byte keys pre-loaded, `KVNS_PERSIST_INTERVAL=2`, `-c 50 -P 1`, so individual request stalls aren't masked by pipelining):

| Config | SET ops/sec | p50 ms | p99 ms | max ms |
|---|---|---|---|---|
| `KVNS_SHARED_VALUES=false` | 101,368 | 0.26 | 0.32 | 16.18 |
| `KVNS_SHARED_VALUES=true` | 101,420 | 0.26 | 0.32 | 6.94 |
Same throughput either way; max latency during a snapshot flush drops by ~58% with shared values because the persist task bumps `Arc` refcounts instead of deep-copying payload bytes.
Key takeaways:
- Sharded mode delivers a ~5x pipeline-SET and ~2x pipeline-GET speedup over classic, with modest gains on direct (non-pipelined) workloads.
- `KVNS_SHARED_VALUES=true` costs about 0–3% direct throughput but cuts persist-snapshot max latency by more than half.