Timebox is a small, opinionated event sourcing library for Go with pluggable persistence backends including memory, Redis/Valkey, PostgreSQL, and Raft. It provides an append-only event log, optimistic concurrency, snapshotting, and append-time indexing so multiple instances can coordinate through the same store.
Timebox currently ships with:
- `memory` for tests and single-process use
- `redis` for Redis or Valkey deployments
- `postgres` for PostgreSQL-backed persistence
- `raft` for multi-node consensus
- `Store`: event-store semantics over a `Persistence`
- `Executor`: loads aggregate state, runs a command, persists raised events, and retries on optimistic conflicts
- `Aggregator`: accumulates events and exposes the current aggregate view during a command
- `Indexer`: optional append-time hook that derives status and label updates from an appended event batch
- `Snapshot`: cached aggregate state plus the sequence it represents
`timebox.Config` controls store behavior regardless of backend:

- `TrimEvents`: whether saving a snapshot trims older stored events
- `SnapshotRatio`: when an `Executor` should opportunistically refresh a snapshot while loading state
- `MaxRetries`: optimistic concurrency retry limit
- `CacheSize`: executor projection cache size
- `Indexer`: optional function that derives status and label updates from an appended event batch
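As an illustration, a config tuned for aggressive snapshotting might look like the sketch below. The field values, and the exact threshold semantics of `SnapshotRatio`, are assumptions for illustration only:

```go
cfg := timebox.Config{
	TrimEvents:    true, // saving a snapshot also trims the events it covers
	SnapshotRatio: 4,    // refresh opportunistically once trailing events outgrow the snapshot (threshold semantics assumed)
	MaxRetries:    5,    // retry up to five optimistic-concurrency conflicts
	CacheSize:     1024, // executor projection cache entries
}
```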
Create a store by opening backend persistence and then binding a store to it:
```go
p, err := postgres.NewPersistence(postgres.Config{...})
store, err := p.NewStore(timebox.Config{...})
```

You can also call `timebox.NewStore(p, cfg)` directly when you already have a backend value that satisfies `timebox.Backend`.
Snapshotting is available in two ways:
- explicit saves through `Executor.SaveSnapshot(id)` or `Store.PutSnapshot(id, value, sequence)`
- opportunistic executor saves while loading aggregates, when no snapshot exists yet or when trailing event data grows past `SnapshotRatio`
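A minimal sketch of the explicit path, assuming an `Executor` named `exec` and a `Store` named `store` already exist, and that aggregate IDs are strings (all assumptions; this README does not pin down the types):

```go
// Explicit snapshot via the executor; "order-42" is an illustrative ID.
if err := exec.SaveSnapshot("order-42"); err != nil {
	log.Fatalf("save snapshot: %v", err)
}

// Or write a snapshot directly, supplying the aggregate value and the
// sequence number it represents (value and sequence types assumed).
if err := store.PutSnapshot("order-42", view, seq); err != nil {
	log.Fatalf("put snapshot: %v", err)
}
```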
`postgres.Config` adds:

- `URL`: connection URL
- `Prefix`: logical store namespace
- `MaxConns`: pgx pool size cap
The Postgres backend stores:
- aggregate status and labels in `timebox_index`
- snapshots in `timebox_snapshot`
- events in `timebox_events`
`redis.Config` adds:

- `Addr`: Redis or Valkey `host:port`
- `Password`: optional password
- `Prefix`: logical store namespace
- `Shard`: optional hash-tag value for cluster slot affinity
- `DB`: logical database index
`raft.Config` fields:

- `LocalID`: stable local Raft node ID
- `Address`: node address used for Raft traffic
- `DataDir`: durable local state directory
- `LogTailSize`: hot retained WAL suffix cache size, default `20480`
- `Servers`: bootstrap voter set
- `Publisher`: optional callback for committed events after they are durably applied
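Assuming the raft backend follows the same `NewPersistence` constructor pattern as the postgres one, and that each bootstrap server entry carries an ID and an address (both assumptions, not confirmed by this README), a three-node bootstrap might be sketched as:

```go
p, err := raft.NewPersistence(raft.Config{
	LocalID:     "node-1",           // this node's stable ID
	Address:     "10.0.0.1:7000",    // address used for Raft traffic
	DataDir:     "/var/lib/timebox", // durable local state
	LogTailSize: 20480,              // default hot WAL suffix cache
	// Bootstrap voter set; the server-entry type here is assumed.
	Servers: []raft.Server{
		{ID: "node-1", Address: "10.0.0.1:7000"},
		{ID: "node-2", Address: "10.0.0.2:7000"},
		{ID: "node-3", Address: "10.0.0.3:7000"},
	},
})
```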
`Config.Indexer` lets you derive indexed metadata from an appended event batch. `Index` currently supports:

- `Status`: aggregate status plus the time it entered that status
- `Labels`: current aggregate label values
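As a sketch of the derivation pattern only: the indexer's real signature is not documented here, so the event shape (`evt.Type`, `evt.Data`) and the `Index` construction below are assumptions meant to show the idea, not the library's actual types:

```go
// Illustrative indexer: derive a status and a "region" label from the
// appended batch. Event and Index shapes are assumed, not the real API.
cfg.Indexer = func(batch []Event) Index {
	idx := Index{Labels: map[string]string{}}
	for _, evt := range batch {
		switch evt.Type {
		case "OrderPlaced":
			idx.Status = "placed"
			idx.Labels["region"] = evt.Data["region"]
		case "OrderShipped":
			idx.Status = "shipped"
		}
	}
	return idx
}
```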
Read paths exposed by the store:

- `Store.GetAggregateStatus(id)`
- `Store.ListAggregatesByStatus(status)`
- `Store.ListLabelValues(label)`
- `Store.ListAggregatesByLabel(label, value)`
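Putting the read paths together in a hedged sketch (`store` is an existing `Store`; return types are assumed and error handling is elided for brevity):

```go
// Indexed status of a single aggregate.
status, _ := store.GetAggregateStatus("order-42")

// All aggregates currently in a given status.
shipped, _ := store.ListAggregatesByStatus("shipped")

// Distinct values seen for a label, then the aggregates carrying one value.
regions, _ := store.ListLabelValues("region")
euOrders, _ := store.ListAggregatesByLabel("region", "eu-west")
```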
Archiving moves an aggregate's snapshot and event history into backend-specific archive storage and clears the live records. It is a one-way operation. The memory and redis backends support archiving, while postgres and raft do not.
Call `Store.Archive(id)`.
To consume archived records, call `Store.ConsumeArchive(ctx, handler)`. It blocks until one record is processed or the context is done. Use `context.WithTimeout` to poll with a deadline.
Handlers must be idempotent because processing is at-least-once.
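A polling-consumer sketch along those lines; the handler's record type is not shown in this README, so `ArchiveRecord` and `exportRecord` are illustrative names:

```go
// Poll with a one-second deadline per attempt. The handler must tolerate
// replays because delivery is at-least-once.
for {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	err := store.ConsumeArchive(ctx, func(rec ArchiveRecord) error {
		return exportRecord(rec) // idempotent side effect (illustrative)
	})
	cancel()
	if err != nil && !errors.Is(err, context.DeadlineExceeded) {
		log.Printf("consume archive: %v", err)
	}
}
```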
`examples/order.go` shows a simple order lifecycle over Timebox.
Work in progress. Not ready for production use.
