Releases: aimdb-dev/aimdb
v1.0.0
AimDB v1.0.0 Release Notes
Stable core API, real-time browser streaming, and LLM-driven architecture design!
Async in-memory database for MCU → edge → cloud data synchronization
Highlights
This release promotes aimdb-core to 1.0.0 – declaring a stable public API for the core database engine. It also ships four entirely new crates: a WebSocket connector for real-time browser streaming, a WASM adapter for running AimDB in the browser, a code generation library for architecture-to-code workflows, and a shared wire protocol for the WebSocket ecosystem. The MCP server gains a full architecture agent for LLM-driven system design, and the CLI adds code generation and live monitoring commands.
Headline: MCU → Edge → Cloud → Browser – AimDB now spans the full stack.
Extensible Streamable registry – the Streamable type dispatch system has been redesigned from a closed dispatcher to an open, extensible registry pattern. Users now register their own types via `.register::<T>()` on connector and adapter builders. Concrete contracts (Temperature, Humidity, GpsLocation) have moved to application-level crates.
Major Features
WebSocket Connector (New Crate)
Real-time bidirectional streaming to browsers and between AimDB instances!
use aimdb_tokio_adapter::TokioAdapter;
use aimdb_websocket_connector::WebSocketConnector;
let builder = AimDbBuilder::new()
.runtime(TokioAdapter::new())
.with_connector(
WebSocketConnector::new()
.bind("0.0.0.0:8080")
.path("/ws")
.with_late_join(true),
);
// Push records to browser clients
builder.configure::<Temperature>(AppKey::Temp, |reg| {
reg.buffer(BufferCfg::SingleLatest)
.link_to("ws://temperature"); // → streams to all subscribers
});

Two modes:
- Server mode (Axum-based) – accept incoming WebSocket connections via `link_to("ws://topic")`
- Client mode (tokio-tungstenite) – connect out to remote servers via `link_to("ws-client://host/topic")` for AimDB-to-AimDB sync without a broker
- Authentication via a pluggable `AuthHandler` trait
- MQTT-style wildcards – `#` multi-level, `*` single-level topic matching
- Late-join – new subscribers receive current values immediately
- `StreamableRegistry` – extensible type-erased dispatch via `.register::<T>()` with schema-name collision detection
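The wildcard rules above are simple to state precisely. Below is a minimal, self-contained sketch of `#` (multi-level) and `*` (single-level) matching; it is illustrative only, not the connector's actual matcher, and it assumes `/`-separated topic segments:

```rust
// Hypothetical sketch of MQTT-style topic matching as described above:
// `#` matches any number of trailing levels, `*` matches exactly one level.
fn topic_matches(pattern: &str, topic: &str) -> bool {
    let mut p = pattern.split('/');
    let mut t = topic.split('/');
    loop {
        match (p.next(), t.next()) {
            (Some("#"), _) => return true,             // multi-level: matches the rest
            (Some("*"), Some(_)) => continue,          // single-level: any one segment
            (Some(ps), Some(ts)) if ps == ts => continue,
            (None, None) => return true,               // both exhausted: exact match
            _ => return false,
        }
    }
}

fn main() {
    assert!(topic_matches("sensors/#", "sensors/indoor/temp"));
    assert!(topic_matches("sensors/*/temp", "sensors/indoor/temp"));
    assert!(!topic_matches("sensors/*", "sensors/indoor/temp"));
    println!("ok");
}
```

In this sketch `sensors/#` also matches `sensors` itself, mirroring MQTT's usual trailing-wildcard behavior.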
WASM Adapter (New Crate)
Run AimDB in the browser – full dataflow engine via WebAssembly!
import { useRecord, useBridge } from 'aimdb-wasm-adapter/react';
function TemperatureDashboard() {
const bridge = useBridge("ws://localhost:8080/ws");
const temp = useRecord<Temperature>(bridge, "sensor.temperature");
return <div>Current: {temp?.celsius}°C</div>;
}

Features:
- Full `aimdb-executor` trait implementations (`RuntimeAdapter`, `Spawn`, `TimeOps`, `Logger`)
- `Rc<RefCell<…>>` buffers – zero-overhead for single-threaded WASM
- `WasmDb` facade via `#[wasm_bindgen]`: `configureRecord`, `get`, `set`, `subscribe`
- `WsBridge` – WebSocket bridge connecting the browser to a remote AimDB server
- React hooks: `useRecord<T>`, `useSetRecord<T>`, `useBridge`
- `SchemaRegistry` for type-erased record dispatch with an extensible `.register::<T>()` API
- `no_std` + `alloc` compatible (`wasm32-unknown-unknown` target)
Data Contracts – Pure Trait Crate
aimdb-data-contracts refocused as a pure trait-definition crate (version reset to 0.1.0).
Trait definitions for self-describing data schemas that work identically across MCU, edge, and cloud:
- `SchemaType` – unique identity and versioning
- `Streamable` – capability marker for types crossing serialization boundaries
- `Linkable` – wire format for connector transport
- `Simulatable` – test data generation
- `Observable` – signal extraction for monitoring
- `Migratable` – schema evolution with `MigrationChain` and `MigrationStep`
Concrete contracts (Temperature, Humidity, GpsLocation) and the closed StreamableVisitor dispatcher have been removed – see Breaking Changes below.
Code Generation (New Crate)
Architecture-to-code: generate Rust source and Mermaid diagrams from .aimdb/state.toml!
# Generate Mermaid architecture diagram
aimdb generate mermaid
# Generate Rust schema code
aimdb generate schema
# Scaffold a new common crate
aimdb generate common-crate
# Scaffold a hub binary crate
aimdb generate hub-crate

aimdb-codegen features:
- `ArchitectureState` type for reading `.aimdb/state.toml` decision records
- Mermaid diagram generation from architecture state
- Rust source generation – value structs, key enums, `SchemaType`/`Linkable` impls, `configure_schema()` functions
- Common crate, hub crate, and binary crate scaffolding
- State validation module for architecture integrity checks
Architecture Agent (MCP)
LLM-driven system design – propose, validate, and apply architecture changes through conversation!
The MCP server now includes a full architecture agent with a session state machine:
Idle → Gathering → Proposing → Resolve
16+ new MCP tools:
| Tool | Description |
|---|---|
| `propose_add_record` | Add a new record to the architecture |
| `propose_add_connector` | Add a connector (MQTT, WebSocket, etc.) |
| `propose_modify_buffer` | Change buffer type/capacity |
| `propose_modify_fields` | Modify record fields |
| `propose_modify_key_variants` | Change key variants |
| `remove_record` | Remove a record |
| `rename_record` | Rename a record |
| `reset_session` | Reset agent session state |
| `resolve_proposal` | Apply or reject pending proposals |
| `save_memory` | Persist architecture decisions |
| `validate_against_instance` | Validate architecture against a running instance |
| `get_architecture` | Get current architecture state |
| `get_buffer_metrics` | Get buffer performance metrics |
Architecture MCP resources provide Mermaid diagrams and validation results as live resources.
Wire Protocol (New Crate)
Shared wire protocol types for the WebSocket ecosystem.
aimdb-ws-protocol provides ServerMessage and ClientMessage enums used by both the WebSocket connector (server side) and the WASM adapter (browser side). JSON-encoded with "type" discriminant tag.
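The "type"-discriminant idea can be shown without the crate itself. A hypothetical sketch (the variant and field names below are assumptions; the real crates derive this encoding with serde):

```rust
// Hypothetical sketch of a "type"-tagged wire message, mirroring the
// ServerMessage/ClientMessage pattern described above.
enum ClientMessage {
    Subscribe { topic: String },
    Publish { topic: String, payload: String },
}

// Encode a message as JSON with a "type" discriminant field.
fn encode(msg: &ClientMessage) -> String {
    match msg {
        ClientMessage::Subscribe { topic } =>
            format!(r#"{{"type":"subscribe","topic":"{}"}}"#, topic),
        ClientMessage::Publish { topic, payload } =>
            format!(r#"{{"type":"publish","topic":"{}","payload":"{}"}}"#, topic, payload),
    }
}

fn main() {
    let sub = ClientMessage::Subscribe { topic: "sensor.temperature".into() };
    assert_eq!(encode(&sub), r#"{"type":"subscribe","topic":"sensor.temperature"}"#);
    let cmd = ClientMessage::Publish { topic: "cmd".into(), payload: "on".into() };
    println!("{}", encode(&cmd));
}
```

The tag lets a receiver dispatch on `"type"` before deserializing the rest of the payload, which is why both the server and the browser side can share one protocol crate.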
Other Changes
aimdb-core
- Added `ws://` and `wss://` URL scheme support in `ConnectorUrl` for WebSocket connectors
- `ConnectorUrl::default_port()` now handles WebSocket schemes
- `ConnectorUrl::is_secure()` includes `wss://`
aimdb-cli
- New `aimdb generate` subcommand for code generation via `aimdb-codegen`
- New `aimdb watch` subcommand for live record monitoring
Dependency Updates
- aimdb-knx-connector `0.3.1` – updated Embassy dependency versions (executor 0.10.0, time 0.5.1, sync 0.8.0, futures 0.1.2, net 0.9.0)
- aimdb-mqtt-connector `0.5.1` – updated Embassy dependency versions (executor 0.10.0, time 0.5.1, sync 0.8.0, net 0.9.0)
- Embassy submodules updated to latest upstream commits
Published Crates
New Crates
- `aimdb-websocket-connector@0.1.0` – real-time bidirectional WebSocket streaming
- `aimdb-ws-protocol@0.1.0` – shared wire protocol types
- `aimdb-wasm-adapter@0.1.0` – WebAssembly runtime adapter with React hooks
- `aimdb-codegen@0.1.0` – architecture-to-code generation
Updated
| Crate | Previous | New |
|---|---|---|
| `aimdb-core` | 0.5.0 | 1.0.0 |
| `aimdb-data-contracts` | 0.5.0 | 0.1.0 (reset – pure trait crate) |
| `aimdb-cli` | 0.5.0 | 0.6.0 |
| `aimdb-mcp` | 0.5.0 | 0.6.0 |
| `aimdb-mqtt-connector` | 0.5.0 | 0.5.1 |
| `aimdb-knx-connector` | 0.3.0 | 0.3.1 |
Unchanged
[email protected][email protected][email protected][email protected][email protected][email protected][email protected][email protected]
Breaking Changes
- aimdb-data-contracts: Concrete contracts removed from this crate. If you depended on `Temperature`, `Humidity`, or `GpsLocation` from `aimdb-data-contracts`, define them in your own application crate or a shared common crate (see `examples/weather-mesh-demo/weather-mesh-common` for a reference).
- aimdb-data-contracts: `for_each_streamable()` and `StreamableVisitor` removed. Use `.register::<T>()` on connector/adapter builders instead.
- aimdb-data-contracts: `ts` feature removed (`ts-rs` dependency dropped).
- aimdb-data-contracts: Version reset from 0.5.0 to 0.1.0 to reflect the reduced, stabilizing scope as a pure trait crate.
New CLI Commands
# Code generation
aimdb generate mermaid # Generate Mermaid architecture diagram
aimdb generate schema # Generate Rust schema code
aimdb generate common-crate # Scaffold a common crate
aimdb generate hub-crate # Scaffold a hub binary crate
# Live monitoring
aimdb watch <record> # Watch live record updates

Migration Guide
Step 1: Update dependencies
[dependencies]
aimdb-core = "1.0"
aimdb-data-contracts = "0.1" # reset β now a pure trait crate
aimdb-tokio-adapter = "0.5" # unchanged
# Optional: add WebSocket streaming
aimdb-websocket-connector = "0.1"
# Optional: add WASM support
aimdb-wasm-adapter = "0.1"

Step 2: Move concrete contracts to your crate
If you used Temperature, Humidity, or GpsLocation from aimdb-data-contracts, copy them into your own application or shared common crate. See examples/weather-mesh-demo/weather-mesh-common for a working example.
Step 3: Replace StreamableVisitor with .register::<T>()
// Before (closed dispatcher):
// for_each_streamable(visitor);
// After (extensible registry):
let connector = WebSocketConnector::new()
.register::<Temperature>()
.r...

v0.5.0 - Transforms, Persistence, Graph Introspection & Dynamic Routing
AimDB v0.5.0 Release Notes
Transforms, persistence, graph introspection, and dynamic routing!
What's New in v0.5.0
This release introduces reactive data transformations, a pluggable persistence layer, a dependency graph introspection API, and dynamic topic/address routing for connectors. It also ships two new crates (aimdb-persistence, aimdb-persistence-sqlite).
Major Features
Transform API (Design 020)
Reactive data transformations between records – computed values that update automatically!
use aimdb_core::{AimDbBuilder, RecordKey};
// Single-input transform: derive Fahrenheit from Celsius
builder.configure::<Celsius>(AppKey::TempCelsius, |reg| {
reg.buffer(BufferCfg::SingleLatest);
});
builder.configure::<Fahrenheit>(AppKey::TempFahrenheit, |reg| {
reg.transform_raw(AppKey::TempCelsius, |c: Celsius| {
Fahrenheit { value: c.value * 9.0 / 5.0 + 32.0 }
});
});
// Multi-input join: combine humidity + temperature into comfort index
builder.configure::<ComfortIndex>(AppKey::Comfort, |reg| {
reg.transform_join_raw(|join| {
join.input::<Celsius>(AppKey::TempCelsius)
.input::<Humidity>(AppKey::Humidity)
.on_trigger(|trigger, state| {
// Called whenever any input changes
Some(ComfortIndex::compute(state))
})
});
});

Features:
- Single-input `transform_raw()` for simple derivations
- Multi-input `transform_join_raw()` with `JoinTrigger` event dispatch
- Mutual exclusion: a record cannot have both `.source()` and `.transform()`
- Automatic spawning: transforms run as async tasks during `AimDb::build()`
- Tracing integration: full lifecycle event logging
Graph Introspection API (Design 021)
Visualize the dependency graph of your AimDB instance!
// Get full dependency graph
let nodes = db.graph_nodes();
let edges = db.graph_edges();
let order = db.graph_topo_order();
// Nodes show origin type and buffer config
for node in &nodes {
println!("{}: {:?} ({:?})", node.key, node.origin, node.buffer_info);
}
// Edges show data flow
for edge in &edges {
println!("{} → {} ({:?})", edge.from, edge.to, edge.edge_type);
}

RecordOrigin variants:
- `Source` – direct producer writes
- `Link` – connector-bridged external data
- `Transform` – single-input reactive derivation
- `TransformJoin` – multi-input reactive join
- `Passive` – no producer (consumer-only)
Also available via aimdb-client and aimdb-mcp tools.
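Given the node and edge lists from `graph_nodes()`/`graph_edges()`, an order like the one `graph_topo_order()` returns can be computed with Kahn's algorithm. A stand-alone sketch (the record names are hypothetical):

```rust
use std::collections::HashMap;

// Kahn's algorithm over a (from, to) edge list: nodes with no
// incoming edges come first, so sources precede their transforms.
fn topo_order(nodes: &[&str], edges: &[(&str, &str)]) -> Vec<String> {
    let mut indegree: HashMap<&str, usize> = nodes.iter().map(|n| (*n, 0)).collect();
    for (_, to) in edges {
        *indegree.get_mut(to).unwrap() += 1;
    }
    let mut queue: Vec<&str> = nodes.iter().copied().filter(|n| indegree[n] == 0).collect();
    let mut order = Vec::new();
    while let Some(n) = queue.pop() {
        order.push(n.to_string());
        for (from, to) in edges {
            if *from == n {
                let d = indegree.get_mut(to).unwrap();
                *d -= 1;
                if *d == 0 {
                    queue.push(*to);
                }
            }
        }
    }
    order
}

fn main() {
    let order = topo_order(
        &["celsius", "fahrenheit", "comfort"],
        &[("celsius", "fahrenheit"), ("celsius", "comfort")],
    );
    assert_eq!(order[0], "celsius"); // the source record comes first
    println!("{:?}", order);
}
```

This ordering is what makes "spawn order" well defined: every record is started only after the records it consumes from.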
Persistence Layer (New Crates)
Long-term record history with pluggable backends!
use aimdb_persistence::AimDbBuilderPersistExt;
use aimdb_persistence_sqlite::SqliteBackend;
// Set up SQLite persistence with 7-day retention
let backend = SqliteBackend::new("./aimdb_history.db")?;
builder.with_persistence(backend, Duration::from_secs(7 * 24 * 3600));
// Mark records to persist
builder.configure::<Temperature>(AppKey::Temp, |reg| {
reg.buffer(BufferCfg::SingleLatest)
.persist("sensor.temperature"); // → stored to SQLite
});
let db = builder.build().await?;
// Query historical data
let last_100 = db.query_latest::<Temperature>("sensor.*", Some(100)).await;
let this_week = db.query_range::<Temperature>(
"sensor.*",
week_start_ms,
week_end_ms,
None, // no per-record limit
).await;

aimdb-persistence features:
- `PersistenceBackend` trait – implement your own storage
- Automatic retention cleanup task (24-hour interval)
- `query_latest`, `query_range`, `query_raw` APIs
aimdb-persistence-sqlite features:
- WAL journal mode for concurrent reads
- Window-function queries for efficient top-N per record
- `*` wildcard pattern matching
- Actor-model writer thread; `Clone` is an O(1) handle copy
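The `query_latest` semantics (newest N values per matching record) are easy to express in plain code. Here is an in-memory sketch of the same logic the SQLite backend implements with window functions (names and row shape are illustrative):

```rust
use std::collections::HashMap;

// "Top-N newest per key": sort all rows newest-first, then take at most
// `n` rows for each record key (a toy stand-in for the SQL window query).
fn latest_per_key(
    mut rows: Vec<(&str, u64, f32)>, // (record key, timestamp ms, value)
    n: usize,
) -> HashMap<String, Vec<(u64, f32)>> {
    rows.sort_by(|a, b| b.1.cmp(&a.1)); // newest first
    let mut out: HashMap<String, Vec<(u64, f32)>> = HashMap::new();
    for (key, ts, v) in rows {
        let entry = out.entry(key.to_string()).or_default();
        if entry.len() < n {
            entry.push((ts, v));
        }
    }
    out
}

fn main() {
    let rows = vec![
        ("sensor.temperature", 1, 20.0),
        ("sensor.temperature", 2, 21.0),
        ("sensor.temperature", 3, 22.0),
        ("sensor.humidity", 1, 55.0),
    ];
    let latest = latest_per_key(rows, 2);
    assert_eq!(latest["sensor.temperature"], vec![(3, 22.0), (2, 21.0)]);
    assert_eq!(latest["sensor.humidity"].len(), 1);
    println!("ok");
}
```

Doing this inside SQLite with a window function avoids loading every row into memory, which is the point of the backend's approach.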
Dynamic Topic/Address Routing (Design 018)
Resolve MQTT topics or KNX group addresses at runtime based on data values!
// Outbound: per-message topic from payload
builder.configure::<SensorReading>(AppKey::Reading, |reg| {
reg.link_to("mqtt://broker/sensors/default")
.with_topic_provider(|reading: &SensorReading| {
format!("sensors/{}/data", reading.sensor_id)
});
});
// Inbound: late-binding topic from config/discovery
builder.configure::<Command>(AppKey::Command, |reg| {
reg.link_from("mqtt://broker/")
.with_topic_resolver(|| {
// Called once at connector startup
format!("commands/{}", load_device_id())
});
});

Works in both std and no_std + alloc environments.
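Under the hood the provider hook is just a closure from a payload to a topic string, with the static `link_to` topic as fallback. A stand-alone sketch (struct, field, and function names are assumptions):

```rust
// Toy model of per-message topic resolution: a registered provider
// closure overrides the connector's static default topic.
struct SensorReading {
    sensor_id: String,
    value: f32,
}

fn resolve_topic(
    default: &str,
    provider: Option<&dyn Fn(&SensorReading) -> String>,
    reading: &SensorReading,
) -> String {
    match provider {
        Some(f) => f(reading),        // dynamic: derived from the payload
        None => default.to_string(),  // static: the link_to(...) topic
    }
}

fn main() {
    let provider = |r: &SensorReading| format!("sensors/{}/data", r.sensor_id);
    let reading = SensorReading { sensor_id: "abc123".into(), value: 21.5 };
    assert_eq!(
        resolve_topic("sensors/default", Some(&provider), &reading),
        "sensors/abc123/data"
    );
    assert_eq!(resolve_topic("sensors/default", None, &reading), "sensors/default");
    println!("value {} routed", reading.value);
}
```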
Record Drain API (Design 019)
Non-blocking batch pull for accumulated history – perfect for LLM analysis!
// Via AimDbClient (remote access)
let response = client.drain_record("sensor.temperature").await?;
println!("Drained {} values", response.count);
// First call is always empty (cold start β creates the reader)
// Subsequent calls return everything since last drain

Cold-start semantics: the first drain call creates a reader and returns empty. Subsequent calls return all values accumulated since the previous drain. This enables stateful batch analysis without missing data.
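The cold-start contract can be modeled with a cursor into the record's history. A sketch of the described semantics (not the actual client implementation; names are illustrative):

```rust
// Toy drain reader: first drain creates the cursor and returns nothing;
// each later drain returns everything appended since the previous one.
struct DrainReader {
    cursor: Option<usize>,
}

impl DrainReader {
    fn new() -> Self {
        Self { cursor: None }
    }

    fn drain<'a>(&mut self, history: &'a [f32]) -> &'a [f32] {
        let start = match self.cursor {
            None => history.len(), // cold start: reader created, nothing returned
            Some(c) => c,
        };
        self.cursor = Some(history.len());
        &history[start..]
    }
}

fn main() {
    let mut reader = DrainReader::new();
    let mut history = vec![1.0, 2.0];
    assert!(reader.drain(&history).is_empty()); // first call: always empty
    history.extend([3.0, 4.0]);
    assert_eq!(reader.drain(&history), &[3.0, 4.0]); // everything since last drain
    println!("ok");
}
```

Because the cursor advances atomically with each drain, nothing between two drains can be missed or double-counted, which is what makes periodic polling safe for batch analysis.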
Breaking Changes
1. .with_serialization() → .with_remote_access()
// Before (v0.4.x)
reg.buffer(...).with_serialization();
// After (v0.5.0)
reg.buffer(...).with_remote_access();

2. RecordId::new() requires RecordOrigin
This affects custom buffer/record implementations only:
// Before (v0.4.x)
RecordId::new(type_id, idx)
// After (v0.5.0)
RecordId::new(type_id, idx, RecordOrigin::Source)

3. MCP subscription tools removed
The subscribe_record, unsubscribe_record, list_subscriptions, and get_notification_directory MCP tools have been replaced by drain_record. Update any LLM prompts or MCP tool configurations accordingly.
Published Crates
New Crates
- `aimdb-persistence@0.1.0` – pluggable persistence layer
- `aimdb-persistence-sqlite@0.1.0` – SQLite backend for persistence
Updated
| Crate | Version |
|---|---|
| `aimdb-core` | 0.5.0 |
| `aimdb-tokio-adapter` | 0.5.0 |
| `aimdb-embassy-adapter` | 0.5.0 |
| `aimdb-client` | 0.5.0 |
| `aimdb-sync` | 0.5.0 |
| `aimdb-mqtt-connector` | 0.5.0 |
| `aimdb-knx-connector` | 0.3.0 |
| `aimdb-cli` | 0.5.0 |
| `aimdb-mcp` | 0.5.0 |
Unchanged
New MCP Tools
The aimdb-mcp server now provides these tools:
| Tool | Description |
|---|---|
| `discover_instances` | Find running AimDB servers |
| `list_records` | List all records with metadata |
| `get_record` | Get current value |
| `set_record` | Set writable record value |
| `query_schema` | Infer JSON Schema from value |
| `get_instance_info` | Server version and capabilities |
| `drain_record` | Batch pull accumulated history |
| `graph_nodes` | All graph nodes with origin/buffer info |
| `graph_edges` | Directed data-flow edges |
| `graph_topo_order` | Topological record ordering |
New CLI Commands (aimdb graph)
# List all graph nodes (color-coded by origin)
aimdb graph nodes
# Show directed data-flow edges
aimdb graph edges
# Show topological (spawn) order
aimdb graph order
# Export to Graphviz DOT format
aimdb graph dot > pipeline.dot
dot -Tsvg pipeline.dot -o pipeline.svg

Migration Guide
Step 1: Update dependencies
[dependencies]
aimdb-core = "0.5.0"
aimdb-tokio-adapter = "0.5.0"
# Optional: add persistence
aimdb-persistence = "0.1.0"
aimdb-persistence-sqlite = "0.1.0"

Step 2: Rename .with_serialization() → .with_remote_access()
# Quick find & replace
grep -rn "with_serialization" src/
# Replace with:
# .with_remote_access()

Step 3: Update RecordId::new() calls (custom buffer implementations only)
// Add RecordOrigin argument
RecordId::new(type_id, idx, RecordOrigin::Source)

Step 4: Update MCP tool usage
Replace subscribe_record / unsubscribe_record workflows with drain_record polling:
# Old: subscribe → wait → unsubscribe
# New: call drain_record periodically (first call always empty)
drain_record(socket_path: "...", record_name: "sensor.*")
Examples
All examples updated for v0.5.0:
git clone https://github.com/aimdb-dev/aimdb.git && cd aimdb
# MQTT connector demo
cargo run -p tokio-mqtt-connector-demo
# KNX connector demo
cargo run -p tokio-knx-connector-demo
# Sync API demo
cargo run -p sync-api-demo
# Weather mesh demo (multi-station)
cargo run -p weather-hub
# Embedded (cross-compile)
cd examples/embassy-mqtt-connector-demo
cargo build --release --target thumbv7em-none-eabihf

Contributing
git clone https://github.com/aimdb-dev/aimdb.git
cd aimdb
make check # fmt + clippy + test + embedded cross-compile

License
Apache License 2.0 – see LICENSE.
v0.4.0
AimDB v0.4.0 Release Notes
Compile-time safe record keys with #[derive(RecordKey)] macro!
What's New in v0.4.0
This release introduces compile-time safe record keys via a new derive macro, transforming RecordKey from a struct to a trait. This enables user-defined enum keys with automatic string representation, eliminating runtime key typos and improving embedded system efficiency.
Major Features
RecordKey Trait + Derive Macro
New crate aimdb-derive provides #[derive(RecordKey)] for compile-time checked keys!
Instead of error-prone string literals, define type-safe enum keys:
use aimdb_core::RecordKey;
#[derive(RecordKey, Clone, Copy, PartialEq, Eq, Debug)]
#[key_prefix = "sensor"]
pub enum SensorKey {
#[key = "temp.indoor"]
TempIndoor,
#[key = "temp.outdoor"]
TempOutdoor,
#[key = "humidity"]
#[link_address = "zigbee/sensors/humidity"] // MQTT topic
Humidity,
}
// Compile-time typo detection!
let producer = db.producer::<Temperature>(SensorKey::TempIndoor)?;
// vs runtime error with string: db.producer::<Temperature>("sensor.temp.indor")?;

Benefits:
- Compile-time safety: typos caught at build time
- Zero-allocation: enum variants are `Copy`, no heap allocation
- Connector metadata: `#[link_address = "..."]` for MQTT topics, KNX addresses
- IDE support: autocomplete and refactoring work correctly
- `no_std` compatible: works on embedded targets
StringKey Type
The previous RecordKey struct is now StringKey with improved memory model:
use aimdb_core::StringKey;
// Static keys (zero allocation)
let key: StringKey = "sensors.temp".into();
// Dynamic keys (interned via Box::leak for O(1) Copy/Clone)
let key = StringKey::intern(dynamic_string);

Memory Model:
- `Static(&'static str)` – zero-allocation for string literals
- `Interned(&'static str)` – uses `Box::leak` for O(1) cloning
- Designed for startup-time registration (<1000 keys)
- Debug warning if more than 1000 interned keys
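The `Interned` variant's trick is that `Box::leak` turns an owned `String` into a `&'static str`, which is then trivially cheap to copy. A stand-alone sketch of such an interner (illustrative; `StringKey` wraps this behind an enum, and the deduplication map here is an assumption):

```rust
use std::collections::HashMap;

// Toy interner: each distinct string is leaked exactly once, and every
// later request for it returns the same &'static str (an O(1) copy).
struct Interner {
    seen: HashMap<String, &'static str>,
}

impl Interner {
    fn new() -> Self {
        Self { seen: HashMap::new() }
    }

    fn intern(&mut self, s: String) -> &'static str {
        if let Some(&k) = self.seen.get(&s) {
            return k; // already leaked: reuse the same allocation
        }
        let leaked: &'static str = Box::leak(s.clone().into_boxed_str());
        self.seen.insert(s, leaked);
        leaked
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern(format!("tenants.{}.data", 123));
    let b = interner.intern(format!("tenants.{}.data", 123));
    assert_eq!(a, b);
    assert!(std::ptr::eq(a, b)); // same leaked allocation
    println!("{}", a);
}
```

Leaking is deliberate and safe for this use case precisely because keys are registered once at startup and live for the whole program; the <1000-key guidance above bounds the leaked memory.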
MQTT Connector Fix
Fixed initialization deadlock when subscribing to >10 MQTT topics (Issue #63)
The fix implements:
- Spawn-before-subscribe: event loop spawned before topic subscriptions
- Dynamic channel capacity: scales with topic count (`topics + 10`)
- Proper task yielding: ensures the scheduler runs between operations
Also upgraded rumqttc from 0.24 to 0.25.
Breaking Changes
1. RecordKey: Struct → Trait
RecordKey is now a trait, not a struct.
// Before (v0.3.x)
use aimdb_core::RecordKey;
let key: RecordKey = "sensors.temp".into();
// After (v0.4.0)
use aimdb_core::StringKey;
let key: StringKey = "sensors.temp".into();
// Or use derive macro (recommended)
use aimdb_core::RecordKey; // Now a trait
#[derive(RecordKey, Clone, Copy, PartialEq, Eq)]
pub enum AppKey {
#[key = "sensors.temp"]
SensorsTemp,
}

2. RecordKey Trait Bounds
If you have generic code over record keys:
// Before (v0.3.x)
fn process_key(key: RecordKey) { ... }
// After (v0.4.0)
fn process_key<K: RecordKey>(key: K) { ... }
// Or with StringKey specifically:
fn process_key(key: StringKey) { ... }

Published Crates
New Crate
- `aimdb-derive@0.4.0` – `#[derive(RecordKey)]` macro for compile-time checked keys
Updated to v0.4.0
- `aimdb-core@0.4.0` – RecordKey trait, StringKey type, derive feature
- `aimdb-mqtt-connector@0.4.0` – deadlock fix + rumqttc 0.25
- `aimdb-tokio-adapter@0.4.0` – updated for aimdb-core 0.4.0
- `aimdb-embassy-adapter@0.4.0` – updated for aimdb-core 0.4.0
- `aimdb-client@0.4.0` – updated for aimdb-core 0.4.0
- `aimdb-sync@0.4.0` – updated for aimdb-core 0.4.0
- `aimdb-cli@0.4.0` – updated for aimdb-client 0.4.0
- `aimdb-mcp@0.4.0` – updated for aimdb-client 0.4.0
Unchanged
- [email protected] – no changes (still compatible)
- [email protected] – no changes (still compatible)
Quick Start
Using Derive Macro (Recommended)
use aimdb_core::{AimDbBuilder, RecordKey, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use serde::{Serialize, Deserialize};
use std::sync::Arc;
// Define type-safe keys
#[derive(RecordKey, Clone, Copy, PartialEq, Eq, Debug)]
pub enum AppKey {
#[key = "sensors.temperature"]
#[link_address = "home/sensors/temp"] // MQTT topic
Temperature,
#[key = "sensors.humidity"]
#[link_address = "home/sensors/humidity"]
Humidity,
#[key = "config.settings"]
Settings,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
struct SensorReading {
value: f32,
timestamp: u64,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let runtime = Arc::new(TokioAdapter::new()?);
let mut builder = AimDbBuilder::new().runtime(runtime);
// Register with type-safe keys
builder.configure::<SensorReading>(AppKey::Temperature, |reg| {
reg.buffer(BufferCfg::SingleLatest);
});
builder.configure::<SensorReading>(AppKey::Humidity, |reg| {
reg.buffer(BufferCfg::SingleLatest);
});
let db = builder.build().await?;
// Type-safe producer access
let temp_producer = db.producer::<SensorReading>(AppKey::Temperature)?;
let humidity_producer = db.producer::<SensorReading>(AppKey::Humidity)?;
// Produce data
temp_producer.produce(SensorReading { value: 22.5, timestamp: 1000 }).await?;
humidity_producer.produce(SensorReading { value: 65.0, timestamp: 1001 }).await?;
// Access link address for connector metadata
println!("Temperature MQTT topic: {:?}", AppKey::Temperature.link_address());
// Prints: Some("home/sensors/temp")
Ok(())
}

Using StringKey (Dynamic Keys)
use aimdb_core::{AimDbBuilder, StringKey, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use std::sync::Arc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let runtime = Arc::new(TokioAdapter::new()?);
let mut builder = AimDbBuilder::new().runtime(runtime);
// Static string literals (zero allocation)
builder.configure::<String>("app.logs", |reg| {
reg.buffer(BufferCfg::SpmcRing { capacity: 1000 });
});
// Dynamic keys (for runtime-determined keys)
let tenant_id = "tenant-123";
let dynamic_key = StringKey::intern(format!("tenants.{}.data", tenant_id));
builder.configure::<String>(dynamic_key, |reg| {
reg.buffer(BufferCfg::Mailbox);
});
let db = builder.build().await?;
Ok(())
}

Migration Guide
Step 1: Update Dependencies
[dependencies]
aimdb-core = "0.4.0"
aimdb-tokio-adapter = "0.4.0"
# aimdb-mqtt-connector = "0.4.0" # If using MQTT
# Enable derive macro (included by default)
# aimdb-core = { version = "0.4.0", features = ["derive"] }

Step 2: Choose Key Strategy
Option A: Keep String Literals (Minimal Change)
// Before (v0.3.x)
builder.configure::<Temperature>("sensors.temp", |reg| { ... });
// After (v0.4.0) - No change needed! &'static str implements RecordKey
builder.configure::<Temperature>("sensors.temp", |reg| { ... });

Option B: Adopt Derive Macro (Recommended)
// Define keys once
#[derive(RecordKey, Clone, Copy, PartialEq, Eq)]
pub enum Keys {
#[key = "sensors.temp"]
SensorsTemp,
}
// Use everywhere
builder.configure::<Temperature>(Keys::SensorsTemp, |reg| { ... });
let producer = db.producer::<Temperature>(Keys::SensorsTemp)?;

Step 3: Update RecordKey Imports
// Before (v0.3.x)
use aimdb_core::RecordKey;
let key: RecordKey = "name".into();
// After (v0.4.0)
use aimdb_core::StringKey;
let key: StringKey = "name".into();

Step 4: Update Generic Code
// Before (v0.3.x)
fn process(key: RecordKey) { ... }
// After (v0.4.0)
fn process<K: RecordKey>(key: K) { ... }
// Or specifically:
fn process(key: impl RecordKey) { ... }
fn process(key: StringKey) { ... }

Derive Macro Reference
Attributes
| Attribute | Level | Required | Description |
|---|---|---|---|
| `#[key = "..."]` | Variant | Yes | String representation for the key |
| `#[key_prefix = "..."]` | Enum | No | Prefix prepended to all variant keys |
| `#[link_address = "..."]` | Variant | No | Connector metadata (MQTT topic, KNX address) |
Example with All Features
use aimdb_core::RecordKey;
#[derive(RecordKey, Clone, Copy, PartialEq, Eq, Debug)]
#[key_prefix = "home.automation"] // Applied to all variants
pub enum HomeKey {
#[key = "lights.living"]
#[link_address = "1/1/1"] // KNX group address
LightsLiving,
#[key = "lights.bedroom"]
#[link_address = "1/1/2"]
LightsBedroom,
#[key = "thermostat"]
#[link_address = "mqtt://home/thermostat"]
Thermostat,
}
// Generated methods:
// HomeKey::LightsLiving.as_str() -> "home.automation.lights.living"
// HomeKey::LightsLiving.link_address() -> Some("1/1/1")

Compile-Time Validation
The macro validates at compile time:
- All variants have a `#[key = "..."]` attribute
- No duplicate keys (including after prefix)
- Only unit variants (no tuple/struct variants)
// Compile error: duplicate key
#[derive(RecordKey)]
pub enum BadKeys {
#[key = "same"]
First,
#[key = "same"] // Error: duplicate key "same"
Second,
}
// Compile error: missing key attribute
#[derive(RecordKey)]
pub enum BadKeys {
#[key = "valid"]
First,
Second, // Error: missing #[key = "..."] attribute
}

Bug Fixes
MQTT Connector Deadlock (Issue #63)
Problem: When subscribing to more than 10 MQTT topics, the connector would deadlock during initialization.
Root Cause: The internal channel had a fixed capacity of 10, and subscriptions were made before spawning the event loop, causing the channel to fill up and block.
**Solu...
v0.3.0
AimDB v0.3.0 Release Notes
Major update with RecordId/RecordKey architecture and buffer metrics!
What's New in v0.3.0
This release introduces a complete architectural overhaul of AimDB's internal record storage system, enabling multi-instance records (same type, different keys), stable O(1) indexing, and comprehensive buffer metrics. The new RecordId/RecordKey system provides both the performance benefits of numeric indexing and the usability of human-readable keys.
Major Features
RecordId + RecordKey Architecture
Complete rewrite of internal storage for stable record identification!
AimDB now supports multiple records of the same type with unique keys, enabling patterns like:
- Multiple sensors of the same type (`Temperature`) with different keys (`"sensors.indoor"`, `"sensors.outdoor"`)
- Multi-tenant configurations (`"tenant.a.config"`, `"tenant.b.config"`)
- Regional data streams (`"region.us.metrics"`, `"region.eu.metrics"`)
Key Components:
- `RecordId`: u32 index wrapper for O(1) Vec-based hot-path access
- `RecordKey`: hybrid `&'static str`/`Arc<str>` with zero-alloc static keys and flexible dynamic keys
- O(1) key resolution via `HashMap<RecordKey, RecordId>`
- Type introspection via `HashMap<TypeId, Vec<RecordId>>`
New API:
use aimdb_core::{AimDbBuilder, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use serde::{Serialize, Deserialize};
use std::sync::Arc;
#[derive(Clone, Debug, Serialize, Deserialize)]
struct Temperature {
celsius: f32,
sensor_id: String,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let runtime = Arc::new(TokioAdapter::new()?);
let mut builder = AimDbBuilder::new().runtime(runtime);
// Register MULTIPLE records of the same type with different keys
builder.configure::<Temperature>("sensors.indoor", |reg| {
reg.buffer(BufferCfg::SingleLatest);
});
builder.configure::<Temperature>("sensors.outdoor", |reg| {
reg.buffer(BufferCfg::SingleLatest);
});
let db = builder.build().await?;
// Key-based access for multi-instance records
let indoor_producer = db.producer_by_key::<Temperature>("sensors.indoor")?;
let outdoor_producer = db.producer_by_key::<Temperature>("sensors.outdoor")?;
indoor_producer.produce(Temperature { celsius: 22.5, sensor_id: "indoor-1".into() }).await?;
outdoor_producer.produce(Temperature { celsius: 15.2, sensor_id: "outdoor-1".into() }).await?;
// Introspection
let temp_ids = db.records_of_type::<Temperature>(); // Returns &[RecordId] with 2 IDs
let id = db.resolve_key("sensors.indoor"); // Returns Option<RecordId>
Ok(())
}

Key Naming Conventions:
- Use dot-separated hierarchical names: `"sensors.indoor"`, `"config.app"`
- Keys must be unique across all records (duplicate keys panic at registration)
- Static string literals (`"key"`) are zero-allocation via `&'static str`
- Dynamic keys (`String::from("key")`) use `Arc<str>` for efficient cloning
Buffer Metrics API (Feature-Gated)
Comprehensive buffer introspection for monitoring and debugging!
Enable buffer metrics with the metrics feature flag:
[dependencies]
aimdb-core = { version = "0.3.0", features = ["metrics"] }
aimdb-tokio-adapter = { version = "0.3.0", features = ["metrics"] }

New Metrics:
use aimdb_core::buffer::BufferMetricsSnapshot;
// Get metrics from any record
let metadata = db.record_metadata::<Temperature>("sensors.indoor")?;
if let Some(metrics) = metadata.buffer_metrics {
println!("Produced: {}", metrics.produced_count);
println!("Consumed: {}", metrics.consumed_count);
println!("Dropped: {}", metrics.dropped_count);
println!("Occupancy: {}/{}", metrics.occupancy.0, metrics.occupancy.1);
}

Available Metrics:
- `produced_count`: total items pushed to the buffer
- `consumed_count`: total items consumed across all readers
- `dropped_count`: total items dropped due to lag (per-reader semantics documented)
- `occupancy`: current buffer fill level as a `(current, capacity)` tuple
Supported Buffers:
- SPMC Ring Buffer: full metrics support
- SingleLatest: full metrics support
- Mailbox: full metrics support
Tokio Adapter: Full implementation with atomic counters
Embassy Adapter: Feature flag present (API consistency), but metrics not functional on embedded targets (requires std)
Enhanced Introspection API
New methods for exploring records at runtime:
// Find all records of a specific type
let temperature_records = db.records_of_type::<Temperature>();
for record_id in temperature_records {
let metadata = db.record_metadata_by_id(*record_id)?;
println!("Temperature record: {} (key: {})", record_id.0, metadata.record_key);
}
// Resolve key to RecordId
if let Some(record_id) = db.resolve_key("sensors.indoor") {
println!("Found record with ID: {}", record_id.0);
}
// Key-bound producers/consumers
let producer = db.producer_by_key::<Temperature>("sensors.indoor")?;
let consumer = db.consumer_by_key::<Temperature>("sensors.outdoor")?;
println!("Producer key: {}", producer.key());
println!("Consumer key: {}", consumer.key());

Internal Architecture Improvements
Optimized storage for sub-50ms latency:
Before (v0.2.0):
BTreeMap<TypeId, Box<dyn AnyRecord>> // O(log n) lookups
After (v0.3.0):
Vec<Box<dyn AnyRecord>> // O(1) hot-path access by RecordId
HashMap<RecordKey, RecordId> // O(1) name lookups
HashMap<TypeId, Vec<RecordId>> // O(1) type introspection
Performance Benefits:
- O(1) hot-path access (was O(log n))
- Stable `RecordId` across the application lifetime
- Zero-allocation static keys
- Efficient multi-instance type lookups
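The storage layout above can be sketched in a few lines: a `Vec` for hot-path access by index, plus two `HashMap`s for key resolution and type introspection. This is a toy model of the shape, not the real `AnyRecord` machinery (names are illustrative):

```rust
use std::any::TypeId;
use std::collections::HashMap;

// Toy registry mirroring the v0.3.0 layout: Vec for O(1) access by id,
// HashMap for key -> id, HashMap for TypeId -> ids of that type.
struct Registry {
    records: Vec<String>,                 // stands in for Box<dyn AnyRecord>
    by_key: HashMap<&'static str, usize>, // RecordKey -> RecordId
    by_type: HashMap<TypeId, Vec<usize>>, // TypeId -> RecordIds
}

impl Registry {
    fn new() -> Self {
        Self { records: Vec::new(), by_key: HashMap::new(), by_type: HashMap::new() }
    }

    fn register<T: 'static>(&mut self, key: &'static str, payload: String) -> usize {
        // Duplicate keys panic at registration, as the conventions above state.
        assert!(!self.by_key.contains_key(key), "duplicate key {key}");
        let id = self.records.len();
        self.records.push(payload);
        self.by_key.insert(key, id);
        self.by_type.entry(TypeId::of::<T>()).or_default().push(id);
        id
    }
}

struct Temperature;

fn main() {
    let mut r = Registry::new();
    let indoor = r.register::<Temperature>("sensors.indoor", "22.5".into());
    let outdoor = r.register::<Temperature>("sensors.outdoor", "15.2".into());
    assert_eq!(r.by_key["sensors.indoor"], indoor);
    assert_eq!(r.by_type[&TypeId::of::<Temperature>()], vec![indoor, outdoor]);
    println!("ok");
}
```

Once registration is finished, every hot-path operation is a plain `Vec` index, which is where the O(log n) → O(1) win comes from.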
Breaking Changes
1. Record Registration API
All records now require a key parameter:
// Before (v0.2.x)
builder.configure::<Temperature>(|reg| {
reg.buffer(BufferCfg::SingleLatest);
});
// After (v0.3.0)
builder.configure::<Temperature>("sensor.temperature", |reg| {
reg.buffer(BufferCfg::SingleLatest);
});

2. Type-Based Lookup Ambiguity
If you register multiple records of the same type, type-based methods return an `AmbiguousType` error:

```rust
// With multiple Temperature records registered...
db.produce(temp).await // ❌ Returns Err(AmbiguousType { count: 2, ... })

// Use key-based methods instead:
db.produce_by_key("sensors.indoor", temp).await // ✅ Works correctly
```

Migration Strategy:
- Single-instance records: the type-based API still works (`produce()`, `subscribe()`, etc.)
- Multi-instance records: use the key-based API (`produce_by_key()`, `subscribe_by_key()`, etc.)
3. DynBuffer Implementation
Custom buffer implementations must now explicitly implement DynBuffer<T>:
```rust
// Before (v0.2.x) - automatic via blanket impl
impl<T: Clone + Send> Buffer<T> for MyBuffer<T> { ... }
// DynBuffer was implemented automatically

// After (v0.3.0) - explicit implementation required
impl<T: Clone + Send> Buffer<T> for MyBuffer<T> { ... }

impl<T: Clone + Send + 'static> DynBuffer<T> for MyBuffer<T> {
    fn push(&self, value: T) {
        <Self as Buffer<T>>::push(self, value)
    }

    fn subscribe_boxed(&self) -> Box<dyn BufferReader<T> + Send> {
        Box::new(self.subscribe())
    }

    fn as_any(&self) -> &dyn core::any::Any {
        self
    }

    // Optional: implement metrics_snapshot() if you support metrics
    #[cfg(feature = "metrics")]
    fn metrics_snapshot(&self) -> Option<BufferMetricsSnapshot> {
        None // or Some(...) if you track metrics
    }
}
```

Why this change? It enables adapters to provide `metrics_snapshot()` when the `metrics` feature is enabled.
4. RecordMetadata Changes
New fields added to RecordMetadata:
```rust
pub struct RecordMetadata {
    pub record_id: u32,      // ← NEW: Stable numeric identifier
    pub record_key: String,  // ← NEW: Human-readable key
    pub type_id: u64,
    pub type_name: String,
    pub buffer_type: String,
    pub buffer_capacity: Option<usize>,
    pub producer_count: usize,
    pub consumer_count: usize,
    pub outbound_connector_count: usize,
    pub inbound_connector_count: usize,
    #[cfg(feature = "metrics")]
    pub buffer_metrics: Option<BufferMetricsSnapshot>, // ← NEW: Buffer metrics
}
```

Published Crates
Updated to v0.3.0
- ✅ [email protected] - RecordId/RecordKey architecture + buffer metrics
- ✅ [email protected] - Buffer metrics implementation + multi-instance tests
- ✅ [email protected] - Explicit DynBuffer implementation + metrics feature flag
- ✅ [email protected] - Updated for new RecordMetadata fields
- ✅ [email protected] - Updated for key-based registration API
- ✅ [email protected] - Updated for key-based registration + rumqttc 0.25
- ✅ [email protected] - Updated formatters for RecordId/RecordKey display
- ✅ [email protected] - Updated tools for RecordId/RecordKey introspection

Updated to v0.2.0
- ✅ [email protected] - Updated for key-based registration (first stable release)

Unchanged
- [email protected] - No changes (still compatible)
Quick Start

Multi-Instance Records Example

```rust
use aimdb_core::{AimDbBuilder, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use serde::{Serialize, Deserialize};
use std::sync::Arc;

#[derive(Clone, Debug, Serialize, Deserialize)]
struct SensorReading { ... }
```

v0.2.0
Summary
This release introduces bidirectional connector support, enabling true two-way data synchronization between AimDB and external systems. The new architecture supports simultaneous publishing and subscribing with automatic message routing, working seamlessly across both Tokio (std) and Embassy (no_std) runtimes.
Highlights
- Bidirectional Connectors: New `.link_to()` and `.link_from()` APIs for clear directional data flows
- Type-Erased Router: Automatic routing of incoming messages to the correct typed producers
- ConnectorBuilder Pattern: Simplified connector registration with automatic initialization
- Enhanced MQTT Connector: Complete rewrite supporting simultaneous pub/sub with automatic reconnection
- Embassy Network Integration: Connectors can now access Embassy's network stack for network operations
- Comprehensive Guide: New 1000+ line connector development guide with real-world examples
Breaking Changes
- aimdb-core: `.build()` is now async; connector registration changed from `.with_connector(scheme, instance)` to `.with_connector(builder)`
- aimdb-core: `.link()` deprecated in favor of `.link_to()` (outbound) and `.link_from()` (inbound)
- aimdb-core: Outbound connector architecture refactored to a trait-based system:
  - Removed: `TypedRecord::spawn_outbound_consumers()` method (was called automatically)
  - Added: `ConsumerTrait` and `AnyReader` traits for type-erased outbound routing
  - Added: `AimDb::collect_outbound_routes()` method to gather configured routes
  - Required: Connectors must implement `spawn_outbound_publishers()` and call it in `ConnectorBuilder::build()`
- aimdb-mqtt-connector: API changed from `MqttConnector::new()` to `MqttConnectorBuilder::new()`; automatic task spawning removes the need for manual background task management
- aimdb-mqtt-connector: Added a `spawn_outbound_publishers()` method; it must be called in `build()` for outbound publishing to work
Modified Crates
See individual changelogs for detailed changes:
- aimdb-core: Core connector architecture, router system, bidirectional APIs
- aimdb-tokio-adapter: Connector builder integration
- aimdb-embassy-adapter: Network stack access, connector support
- aimdb-mqtt-connector: Complete bidirectional rewrite for both runtimes
- aimdb-sync: Compatibility with async build
Migration Guide
1. Update connector registration:

```rust
// Old (v0.1.0)
let mqtt = MqttConnector::new("mqtt://broker:1883").await?;
builder.with_connector("mqtt", Arc::new(mqtt));

// New (v0.2.0)
builder.with_connector(MqttConnectorBuilder::new("mqtt://broker:1883"));
```

2. Make `build()` async:
```rust
// Old (v0.1.0)
let db = builder.build()?;

// New (v0.2.0)
let db = builder.build().await?;
```

3. Use directional link methods:
```rust
// Old (v0.1.0)
.link("mqtt://sensors/temp")

// New (v0.2.0)
.link_to("mqtt://sensors/temp")     // For publishing (AimDB → External)
.link_from("mqtt://commands/temp")  // For subscribing (External → AimDB)
```

4. Remove manual MQTT task spawning (Embassy):
```rust
// Old (v0.1.0) - Manual task spawning required
let mqtt_result = MqttConnector::create(...).await?;
spawner.spawn(mqtt_task(mqtt_result.task))?;
builder.with_connector("mqtt", Arc::new(mqtt_result.connector));

// New (v0.2.0) - Automatic task spawning
builder.with_connector(MqttConnectorBuilder::new("mqtt://broker:1883"));
// Tasks spawn automatically during build()
```

5. Update custom connectors to spawn outbound publishers:
If you've implemented a custom connector, you must add spawn_outbound_publishers() support:
```rust
// Old (v0.1.0) - Outbound consumers spawned automatically
impl ConnectorBuilder for MyConnectorBuilder {
    fn build<R>(&self, db: &AimDb<R>) -> DbResult<Arc<dyn Connector>> {
        // ... setup code ...
        Ok(Arc::new(MyConnector { /* fields */ }))
    }
    // Outbound publishing happened automatically via TypedRecord::spawn_outbound_consumers()
}

// New (v0.2.0) - Must explicitly spawn outbound publishers
impl ConnectorBuilder for MyConnectorBuilder {
    fn build<R>(&self, db: &AimDb<R>) -> DbResult<Arc<dyn Connector>> {
        // ... setup code ...
        let connector = MyConnector { /* fields */ };

        // REQUIRED: Collect and spawn outbound publishers
        let outbound_routes = db.collect_outbound_routes(self.protocol_name());
        connector.spawn_outbound_publishers(db, outbound_routes)?;

        Ok(Arc::new(connector))
    }
}

// REQUIRED: Implement the spawn_outbound_publishers method
impl MyConnector {
    fn spawn_outbound_publishers<R: RuntimeAdapter + 'static>(
        &self,
        db: &AimDb<R>,
        routes: Vec<(String, Box<dyn ConsumerTrait>, SerializerFn, Vec<(String, String)>)>,
    ) -> DbResult<()> {
        for (topic, consumer, serializer, _config) in routes {
            let client = self.client.clone();
            let topic_clone = topic.clone();
            db.runtime().spawn(async move {
                // Subscribe to record updates using ConsumerTrait
                match consumer.subscribe_any().await {
                    Ok(mut reader) => loop {
                        match reader.recv_any().await {
                            Ok(value) => {
                                // Serialize and publish
                                if let Ok(bytes) = serializer(&*value) {
                                    let _ = client.publish(&topic_clone, bytes).await;
                                }
                            }
                            Err(_) => break,
                        }
                    },
                    Err(_) => { /* Log the error */ }
                }
            })?;
        }
        Ok(())
    }
}
```

Why this change? The new trait-based architecture provides:
- ✅ Symmetry with inbound routing (`ProducerTrait` ↔ `ConsumerTrait`)
- ✅ Testability (`ConsumerTrait` can be mocked without real records)
- ✅ Type safety via the factory pattern (types are captured at configuration time)
- ✅ Maintainability (connector logic stays in the connector crate)

Migration checklist for custom connectors:
- Add a `spawn_outbound_publishers()` method to your connector implementation
- Call `db.collect_outbound_routes(protocol_name)` in `ConnectorBuilder::build()`
- Call `connector.spawn_outbound_publishers(db, routes)?` before returning the connector
- Use `ConsumerTrait::subscribe_any()` to get type-erased readers
- Handle serialization with the provided `SerializerFn`
- Test both inbound (`.link_from()`) and outbound (`.link_to()`) data flows
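The testability claim can be illustrated with a simplified, synchronous sketch of the type-erasure idea. `ConsumerLike`, `AnyReaderLike`, and `drain_published` below are hypothetical stand-ins for the async `ConsumerTrait`/`AnyReader`, not the real aimdb-core API:

```rust
use std::any::Any;

// Hypothetical, synchronous stand-ins for ConsumerTrait / AnyReader
trait AnyReaderLike {
    fn recv_any(&mut self) -> Option<Box<dyn Any>>;
}

trait ConsumerLike {
    fn subscribe_any(&self) -> Box<dyn AnyReaderLike>;
}

// A mock consumer lets you exercise a connector's publish loop
// without registering any real records.
struct MockConsumer {
    values: Vec<i32>,
}

struct MockReader {
    values: std::vec::IntoIter<i32>,
}

impl AnyReaderLike for MockReader {
    fn recv_any(&mut self) -> Option<Box<dyn Any>> {
        self.values.next().map(|v| Box::new(v) as Box<dyn Any>)
    }
}

impl ConsumerLike for MockConsumer {
    fn subscribe_any(&self) -> Box<dyn AnyReaderLike> {
        Box::new(MockReader { values: self.values.clone().into_iter() })
    }
}

// A publish loop written against the trait, not the mock: it drains
// type-erased values and downcasts them before "publishing".
fn drain_published(consumer: &dyn ConsumerLike) -> Vec<i32> {
    let mut reader = consumer.subscribe_any();
    let mut published = Vec::new();
    while let Some(value) = reader.recv_any() {
        if let Ok(n) = value.downcast::<i32>() {
            published.push(*n);
        }
    }
    published
}
```

Because the loop only sees the trait object, a test can substitute the mock and assert on what would have been published.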
See Connector Development Guide for complete examples.
Documentation
- Added comprehensive Connector Development Guide
- Updated MQTT connector examples for both Tokio and Embassy
- Enhanced API documentation with bidirectional patterns
v0.1.0
Added
Core Database (aimdb-core)
- Initial release of the AimDB async in-memory database engine
- Type-safe record system using `TypeId`-based routing
- Three buffer types for different data flow patterns:
  - SPMC Ring Buffer: High-frequency data streams with bounded memory
  - SingleLatest: State synchronization and configuration updates
  - Mailbox: Commands and one-shot events
- Producer-consumer model with async task spawning
- Runtime adapter abstraction for cross-platform support
- `no_std` compatibility for embedded targets
- Error handling with comprehensive `DbResult<T>` and `DbError` types
- Remote access protocol (AimX v1) for cross-process introspection
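The three buffer semantics can be illustrated with a minimal, synchronous std-only sketch. The real aimdb-core buffers are lock-free and async; `Ring`, `SingleLatest`, and `Mailbox` here are simplified stand-ins showing only what each pattern does with new values:

```rust
use std::collections::VecDeque;

// SPMC Ring Buffer: bounded queue; when full, the oldest value is dropped
struct Ring<T> {
    buf: VecDeque<T>,
    cap: usize,
}

impl<T> Ring<T> {
    fn new(cap: usize) -> Self {
        Ring { buf: VecDeque::new(), cap }
    }
    fn push(&mut self, v: T) {
        if self.buf.len() == self.cap {
            self.buf.pop_front(); // drop the oldest value to bound memory
        }
        self.buf.push_back(v);
    }
}

// SingleLatest: each write overwrites the previous value (state sync)
struct SingleLatest<T> {
    slot: Option<T>,
}

impl<T> SingleLatest<T> {
    fn push(&mut self, v: T) {
        self.slot = Some(v);
    }
}

// Mailbox: one-shot delivery; reading consumes the value (commands/events)
struct Mailbox<T> {
    slot: Option<T>,
}

impl<T> Mailbox<T> {
    fn push(&mut self, v: T) {
        self.slot = Some(v);
    }
    fn take(&mut self) -> Option<T> {
        self.slot.take()
    }
}
```

The choice between them is about loss semantics: a ring keeps a bounded history, single-latest keeps only the newest state, and a mailbox guarantees a command is observed at most once.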
Runtime Adapters
- Tokio Adapter (`aimdb-tokio-adapter`): Full-featured std runtime support
  - Lock-free buffer implementations
  - Configurable buffer capacities
  - Comprehensive async task spawning
- Embassy Adapter (`aimdb-embassy-adapter`): Embedded `no_std` runtime support
  - Configurable task pool sizes (8/16/32 concurrent tasks)
  - Optimized for resource-constrained devices
  - Compatible with ARM Cortex-M targets
MQTT Connector (aimdb-mqtt-connector)
- Dual runtime support for both Tokio and Embassy
- Automatic consumer registration via builder pattern
- Topic mapping with QoS and retain configuration
- Pluggable serializers (JSON, MessagePack, Postcard, custom)
- Automatic reconnection handling
- Uses `rumqttc` for std environments
- Uses `mountain-mqtt` for embedded environments
Developer Tools
- MCP Server (`aimdb-mcp`): LLM-powered introspection and debugging
  - Discover running AimDB instances
  - Query record values and schemas
  - Subscribe to real-time updates
  - Set writable record values
  - JSON Schema inference from record values
  - Notification persistence to JSONL files
- CLI Tool (`aimdb-cli`): Command-line interface (skeleton)
  - Instance discovery and management commands
  - Record inspection capabilities
  - Real-time watch functionality
- Client Library (`aimdb-client`): Reusable connection and discovery logic
  - Unix domain socket communication
  - AimX v1 protocol implementation
  - Clean error handling
Synchronous API (aimdb-sync)
- Blocking wrapper around async AimDB core
- Thread-safe synchronous record access
- Automatic Tokio runtime management
- Ideal for gradual migration from sync to async
Documentation & Examples
- Comprehensive README with architecture overview
- Individual crate documentation with examples
- 12 detailed design documents in `/docs/design`
- Working examples:
  - `tokio-mqtt-connector-demo`: Full Tokio MQTT integration
  - `embassy-mqtt-connector-demo`: Embedded RP2040 with WiFi MQTT
  - `sync-api-demo`: Synchronous API usage patterns
  - `remote-access-demo`: Cross-process introspection server
Build & CI Infrastructure
- Comprehensive Makefile with color-coded output
- GitHub Actions workflows:
- Continuous integration (format, lint, test)
- Security audits (weekly schedule + on-demand)
- Documentation generation
- Release automation
- Cross-compilation testing for the `thumbv7em-none-eabihf` target
- `cargo-deny` configuration for license and dependency auditing
- Dev container setup for a consistent development environment
Design Goals Achieved
- ✅ Sub-50ms latency for real-time synchronization
- ✅ Lock-free buffer operations
- ✅ Cross-platform support (MCU ↔ edge ↔ cloud)
- ✅ Type safety with zero-cost abstractions
- ✅ Protocol-agnostic connector architecture
Known Limitations
- Kafka and DDS connectors planned for future releases
- CLI tool is currently skeleton implementation
- Performance benchmarks not yet included
- Limited to Unix domain sockets for remote access (no TCP yet)
Dependencies
- Rust 1.75+ required
- Tokio 1.47+ for std environments
- Embassy 0.9+ for embedded environments
- See `deny.toml` for approved dependency licenses
Breaking Changes
None (initial release)
Migration Guide
Not applicable (initial release)
Release Notes
v0.1.0 - "Foundation Release"
AimDB v0.1.0 establishes the foundational architecture for async, in-memory data synchronization across MCU ↔ edge ↔ cloud environments. This release focuses on core functionality, dual runtime support, and developer tooling.
Highlights:
- Dual runtime support: works on both standard library (Tokio) and embedded (Embassy) runtimes
- Type-safe record system eliminates runtime string lookups
- Three buffer types cover most real-time data patterns
- MQTT connector works in both `std` and `no_std` environments
- MCP server enables LLM-powered introspection
- 27+ core tests, comprehensive CI/CD, security auditing
Get Started:

```shell
cargo add aimdb-core aimdb-tokio-adapter
```

See README.md for the quickstart guide and examples.
Feedback Welcome:
This is an early release. Please report issues, suggest features, or contribute at:
https://github.com/aimdb-dev/aimdb