
Releases: aimdb-dev/aimdb

v1.0.0

16 Mar 21:49


AimDB v1.0.0 Release Notes

Stable core API, real-time browser streaming, and LLM-driven architecture design!

Async in-memory database for MCU → edge → cloud data synchronization


🎯 Highlights

This release promotes aimdb-core to 1.0.0 — declaring a stable public API for the core database engine. It also ships four entirely new crates: a WebSocket connector for real-time browser streaming, a WASM adapter for running AimDB in the browser, a code generation library for architecture-to-code workflows, and a shared wire protocol for the WebSocket ecosystem. The MCP server gains a full architecture agent for LLM-driven system design, and the CLI adds code generation and live monitoring commands.

Headline: MCU → Edge → Cloud → Browser — AimDB now spans the full stack.

Extensible Streamable registry — the Streamable type dispatch system has been redesigned from a closed dispatcher to an open, extensible registry pattern. Users now register their own types via .register::<T>() on connector and adapter builders. Concrete contracts (Temperature, Humidity, GpsLocation) have moved to application-level crates.


✨ Major Features

🌐 WebSocket Connector (New Crate)

Real-time bidirectional streaming to browsers and between AimDB instances!

use aimdb_tokio_adapter::TokioAdapter;
use aimdb_websocket_connector::WebSocketConnector;

let mut builder = AimDbBuilder::new()
    .runtime(TokioAdapter::new())
    .with_connector(
        WebSocketConnector::new()
            .bind("0.0.0.0:8080")
            .path("/ws")
            .with_late_join(true),
    );

// Push records to browser clients
builder.configure::<Temperature>(AppKey::Temp, |reg| {
    reg.buffer(BufferCfg::SingleLatest)
       .link_to("ws://temperature");  // ← streams to all subscribers
});

Features:

  • 🖥️ Server mode (Axum-based) — accept incoming WebSocket connections via link_to("ws://topic")
  • 🔗 Client mode (tokio-tungstenite) — connect out to remote servers via link_to("ws-client://host/topic") for AimDB-to-AimDB sync without a broker
  • 🔐 Authentication via pluggable AuthHandler trait
  • 📡 MQTT-style wildcards — # multi-level, * single-level topic matching
  • ⏱️ Late-join — new subscribers receive current values immediately
  • 📋 StreamableRegistry — extensible type-erased dispatch via .register::<T>() with schema-name collision detection
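The wildcard semantics can be sketched as a per-segment matcher. A minimal illustration, assuming '/'-separated topic levels; the connector's actual matcher may handle edge cases differently:

```rust
/// Sketch of the documented wildcard semantics: `#` matches any number of
/// trailing levels, `*` matches exactly one level. Illustration only.
fn topic_matches(pattern: &str, topic: &str) -> bool {
    let mut p = pattern.split('/');
    let mut t = topic.split('/');
    loop {
        match (p.next(), t.next()) {
            (Some("#"), _) => return true,    // multi-level: match everything that remains
            (Some("*"), Some(_)) => continue, // single-level: consume exactly one segment
            (Some(pp), Some(tt)) if pp == tt => continue,
            (None, None) => return true,      // both exhausted: full match
            _ => return false,
        }
    }
}
```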

πŸ•ΈοΈ WASM Adapter (New Crate)

Run AimDB in the browser β€” full dataflow engine via WebAssembly!

import { useRecord, useBridge } from 'aimdb-wasm-adapter/react';

function TemperatureDashboard() {
    const bridge = useBridge("ws://localhost:8080/ws");
    const temp = useRecord<Temperature>(bridge, "sensor.temperature");

    return <div>Current: {temp?.celsius}°C</div>;
}

Features:

  • πŸ—οΈ Full aimdb-executor trait implementations (RuntimeAdapter, Spawn, TimeOps, Logger)
  • πŸͺΆ Rc<RefCell<…>> buffers β€” zero-overhead for single-threaded WASM
  • 🎯 WasmDb facade via #[wasm_bindgen]: configureRecord, get, set, subscribe
  • πŸŒ‰ WsBridge β€” WebSocket bridge connecting browser to remote AimDB server
  • βš›οΈ React hooks: useRecord<T>, useSetRecord<T>, useBridge
  • πŸ“‹ SchemaRegistry for type-erased record dispatch with extensible .register::<T>() API
  • no_std + alloc compatible (wasm32-unknown-unknown target)

📋 Data Contracts — Pure Trait Crate

aimdb-data-contracts refocused as a pure trait-definition crate (version reset to 0.1.0).

Trait definitions for self-describing data schemas that work identically across MCU, edge, and cloud:

  • SchemaType — unique identity and versioning
  • Streamable — capability marker for types crossing serialization boundaries
  • Linkable — wire format for connector transport
  • Simulatable — test data generation
  • Observable — signal extraction for monitoring
  • Migratable — schema evolution with MigrationChain and MigrationStep

Concrete contracts (Temperature, Humidity, GpsLocation) and the closed StreamableVisitor dispatcher have been removed — see Breaking Changes below.

πŸ—οΈ Code Generation (New Crate)

Architecture-to-code: generate Rust source and Mermaid diagrams from .aimdb/state.toml!

# Generate Mermaid architecture diagram
aimdb generate mermaid

# Generate Rust schema code
aimdb generate schema

# Scaffold a new common crate
aimdb generate common-crate

# Scaffold a hub binary crate
aimdb generate hub-crate

aimdb-codegen features:

  • 📄 ArchitectureState type for reading .aimdb/state.toml decision records
  • 📊 Mermaid diagram generation from architecture state
  • 🦀 Rust source generation — value structs, key enums, SchemaType/Linkable impls, configure_schema() functions
  • 🏗️ Common crate, hub crate, and binary crate scaffolding
  • ✅ State validation module for architecture integrity checks

🤖 Architecture Agent (MCP)

LLM-driven system design — propose, validate, and apply architecture changes through conversation!

The MCP server now includes a full architecture agent with a session state machine:

Idle → Gathering → Proposing → Resolve

16+ new MCP tools, including:

| Tool | Description |
| --- | --- |
| propose_add_record | Add a new record to the architecture |
| propose_add_connector | Add a connector (MQTT, WebSocket, etc.) |
| propose_modify_buffer | Change buffer type/capacity |
| propose_modify_fields | Modify record fields |
| propose_modify_key_variants | Change key variants |
| remove_record | Remove a record |
| rename_record | Rename a record |
| reset_session | Reset agent session state |
| resolve_proposal | Apply or reject pending proposals |
| save_memory | Persist architecture decisions |
| validate_against_instance | Validate architecture against running instance |
| get_architecture | Get current architecture state |
| get_buffer_metrics | Get buffer performance metrics |

Architecture MCP resources expose Mermaid diagrams and validation results as live, subscribable data.

📡 Wire Protocol (New Crate)

Shared wire protocol types for the WebSocket ecosystem.

aimdb-ws-protocol provides ServerMessage and ClientMessage enums used by both the WebSocket connector (server side) and the WASM adapter (browser side). JSON-encoded with "type" discriminant tag.
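As an illustration of that encoding, here is a hand-rolled sketch with hypothetical message shapes. The real ServerMessage/ClientMessage variants live in aimdb-ws-protocol and will differ; in practice this layout is what serde's internally tagged representation (tag = "type") produces:

```rust
/// Hypothetical client messages illustrating the "type"-discriminant JSON
/// encoding. Variant names and fields are assumptions, not the crate's API.
enum ClientMessage {
    Subscribe { topic: String },
    Publish { topic: String, payload: String }, // payload: raw JSON value
}

impl ClientMessage {
    fn to_json(&self) -> String {
        match self {
            ClientMessage::Subscribe { topic } => {
                format!(r#"{{"type":"Subscribe","topic":"{}"}}"#, topic)
            }
            ClientMessage::Publish { topic, payload } => {
                format!(
                    r#"{{"type":"Publish","topic":"{}","payload":{}}}"#,
                    topic, payload
                )
            }
        }
    }
}
```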


🔧 Other Changes

aimdb-core

  • Added ws:// and wss:// URL scheme support in ConnectorUrl for WebSocket connectors
  • ConnectorUrl::default_port() now handles WebSocket schemes
  • ConnectorUrl::is_secure() includes wss://
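A sketch of what scheme-aware defaults look like. The port numbers are the standard well-known defaults (80/443 for ws/wss, 1883 for mqtt); aimdb-core's actual tables and method signatures may differ:

```rust
/// Illustrative mapping of URL scheme to default port, mirroring what
/// ConnectorUrl::default_port() is described as doing. Not aimdb-core's code.
fn default_port(scheme: &str) -> Option<u16> {
    match scheme {
        "ws" => Some(80),      // plain WebSocket
        "wss" => Some(443),    // WebSocket over TLS
        "mqtt" => Some(1883),  // plain MQTT
        _ => None,             // unknown scheme: caller must supply a port
    }
}

/// Mirrors the described is_secure() behavior: wss:// counts as secure.
fn is_secure(scheme: &str) -> bool {
    scheme == "wss"
}
```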

aimdb-cli

  • New aimdb generate subcommand for code generation via aimdb-codegen
  • New aimdb watch subcommand for live record monitoring

Dependency Updates

  • aimdb-knx-connector 0.3.1 — Updated Embassy dependency versions (executor 0.10.0, time 0.5.1, sync 0.8.0, futures 0.1.2, net 0.9.0)
  • aimdb-mqtt-connector 0.5.1 — Updated Embassy dependency versions (executor 0.10.0, time 0.5.1, sync 0.8.0, net 0.9.0)
  • Embassy submodules updated to latest upstream commits

📦 Published Crates

New Crates

  • aimdb-websocket-connector — real-time WebSocket streaming (server + client modes)
  • aimdb-wasm-adapter — browser/WASM runtime adapter
  • aimdb-codegen — architecture-to-code generation
  • aimdb-ws-protocol — shared WebSocket wire protocol

Updated

| Crate | Previous | New |
| --- | --- | --- |
| aimdb-core | 0.5.0 | 1.0.0 |
| aimdb-data-contracts | 0.5.0 | 0.1.0 (reset — pure trait crate) |
| aimdb-cli | 0.5.0 | 0.6.0 |
| aimdb-mcp | 0.5.0 | 0.6.0 |
| aimdb-mqtt-connector | 0.5.0 | 0.5.1 |
| aimdb-knx-connector | 0.3.0 | 0.3.1 |

Unchanged


⚠️ Breaking Changes

  • aimdb-data-contracts: Concrete contracts removed from this crate. If you depended on Temperature, Humidity, or GpsLocation from aimdb-data-contracts, define them in your own application crate or shared common crate (see examples/weather-mesh-demo/weather-mesh-common for a reference).
  • aimdb-data-contracts: for_each_streamable() and StreamableVisitor removed. Use .register::<T>() on connector/adapter builders instead.
  • aimdb-data-contracts: ts feature removed (ts-rs dependency dropped).
  • aimdb-data-contracts: Version reset from 1.0.0 to 0.1.0 to reflect the reduced, stabilizing scope as a pure trait crate.

🖥️ New CLI Commands

# Code generation
aimdb generate mermaid           # Generate Mermaid architecture diagram
aimdb generate schema            # Generate Rust schema code
aimdb generate common-crate      # Scaffold a common crate
aimdb generate hub-crate         # Scaffold a hub binary crate

# Live monitoring
aimdb watch <record>             # Watch live record updates

🚀 Migration Guide

Step 1: Update dependencies

[dependencies]
aimdb-core = "1.0"
aimdb-data-contracts = "0.1"  # reset — now a pure trait crate
aimdb-tokio-adapter = "0.5"   # unchanged
# Optional: add WebSocket streaming
aimdb-websocket-connector = "0.1"
# Optional: add WASM support
aimdb-wasm-adapter = "0.1"

Step 2: Move concrete contracts to your crate

If you used Temperature, Humidity, or GpsLocation from aimdb-data-contracts, copy them into your own application or shared common crate. See examples/weather-mesh-demo/weather-mesh-common for a working example.
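For illustration, a minimal application-level contract might look like the following. The SchemaType trait here is a stand-in, inlined so the sketch is self-contained; the real trait from aimdb-data-contracts has its own (possibly different) methods:

```rust
/// Stand-in for aimdb-data-contracts' SchemaType trait, inlined for
/// illustration only. The real trait's methods and signatures may differ.
trait SchemaType {
    fn schema_name() -> &'static str;
    fn schema_version() -> u32;
}

/// Application-level contract, now defined in your own common crate
/// instead of aimdb-data-contracts.
#[derive(Clone, Debug, PartialEq)]
pub struct Temperature {
    pub celsius: f32,
}

impl SchemaType for Temperature {
    fn schema_name() -> &'static str { "sensor.temperature" }
    fn schema_version() -> u32 { 1 }
}
```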

Step 3: Replace StreamableVisitor with .register::<T>()

// Before (closed dispatcher):
// for_each_streamable(visitor);

// After (extensible registry):
let connector = WebSocketConnector::new()
    .register::<Temperature>()
    .r...

v0.5.0 - Transforms, Persistence, Graph Introspection & Dynamic Routing

22 Feb 18:45
86c8342


AimDB v0.5.0 Release Notes

Transforms, persistence, graph introspection, and dynamic routing!


🎯 What's New in v0.5.0

This release introduces reactive data transformations, a pluggable persistence layer, a dependency graph introspection API, and dynamic topic/address routing for connectors. It also ships two new crates (aimdb-persistence, aimdb-persistence-sqlite).


✨ Major Features

🔄 Transform API (Design 020)

Reactive data transformations between records — computed values that update automatically!

use aimdb_core::{AimDbBuilder, RecordKey};

// Single-input transform: derive Fahrenheit from Celsius
builder.configure::<Celsius>(AppKey::TempCelsius, |reg| {
    reg.buffer(BufferCfg::SingleLatest);
});

builder.configure::<Fahrenheit>(AppKey::TempFahrenheit, |reg| {
    reg.transform_raw(AppKey::TempCelsius, |c: Celsius| {
        Fahrenheit { value: c.value * 9.0 / 5.0 + 32.0 }
    });
});

// Multi-input join: combine humidity + temperature into comfort index
builder.configure::<ComfortIndex>(AppKey::Comfort, |reg| {
    reg.transform_join_raw(|join| {
        join.input::<Celsius>(AppKey::TempCelsius)
            .input::<Humidity>(AppKey::Humidity)
            .on_trigger(|trigger, state| {
                // Called whenever any input changes
                Some(ComfortIndex::compute(state))
            })
    });
});

Features:

  • 🔗 Single-input transform_raw() for simple derivations
  • 🔗 Multi-input transform_join_raw() with JoinTrigger event dispatch
  • 🔒 Mutual exclusion: a record cannot have both .source() and .transform()
  • 🚀 Automatic spawning: transforms run as async tasks during AimDb::build()
  • 🔍 Tracing integration: full lifecycle event logging

📊 Graph Introspection API (Design 021)

Visualize the dependency graph of your AimDB instance!

// Get full dependency graph
let nodes = db.graph_nodes();
let edges = db.graph_edges();
let order = db.graph_topo_order();

// Nodes show origin type and buffer config
for node in &nodes {
    println!("{}: {:?} ({:?})", node.key, node.origin, node.buffer_info);
}

// Edges show data flow
for edge in &edges {
    println!("{} → {} ({:?})", edge.from, edge.to, edge.edge_type);
}

RecordOrigin variants:

  • Source — direct producer writes
  • Link — connector-bridged external data
  • Transform — single-input reactive derivation
  • TransformJoin — multi-input reactive join
  • Passive — no producer (consumer-only)

Also available via aimdb-client and aimdb-mcp tools.

💾 Persistence Layer (New Crates)

Long-term record history with pluggable backends!

use aimdb_persistence::AimDbBuilderPersistExt;
use aimdb_persistence_sqlite::SqliteBackend;

// Set up SQLite persistence with 7-day retention
let backend = SqliteBackend::new("./aimdb_history.db")?;
builder.with_persistence(backend, Duration::from_secs(7 * 24 * 3600));

// Mark records to persist
builder.configure::<Temperature>(AppKey::Temp, |reg| {
    reg.buffer(BufferCfg::SingleLatest)
       .persist("sensor.temperature");  // ← stored to SQLite
});

let db = builder.build().await?;

// Query historical data
let last_100 = db.query_latest::<Temperature>("sensor.*", Some(100)).await;
let this_week = db.query_range::<Temperature>(
    "sensor.*",
    week_start_ms,
    week_end_ms,
    None,  // no per-record limit
).await;

aimdb-persistence features:

  • PersistenceBackend trait — implement your own storage
  • Automatic retention cleanup task (24-hour interval)
  • query_latest, query_range, query_raw APIs

aimdb-persistence-sqlite features:

  • WAL journal mode for concurrent reads
  • Window-function queries for efficient top-N per record
  • * wildcard pattern matching
  • Actor-model writer thread; Clone = O(1) handle copy

🌐 Dynamic Topic/Address Routing (Design 018)

Resolve MQTT topics or KNX group addresses at runtime based on data values!

// Outbound: per-message topic from payload
builder.configure::<SensorReading>(AppKey::Reading, |reg| {
    reg.link_to("mqtt://broker/sensors/default")
       .with_topic_provider(|reading: &SensorReading| {
           format!("sensors/{}/data", reading.sensor_id)
       });
});

// Inbound: late-binding topic from config/discovery
builder.configure::<Command>(AppKey::Command, |reg| {
    reg.link_from("mqtt://broker/")
       .with_topic_resolver(|| {
           // Called once at connector startup
           format!("commands/{}", load_device_id())
       });
});

Works in both std and no_std + alloc environments.

📡 Record Drain API (Design 019)

Non-blocking batch pull for accumulated history — perfect for LLM analysis!

// Via AimDbClient (remote access)
let response = client.drain_record("sensor.temperature").await?;
println!("Drained {} values", response.count);
// First call is always empty (cold start — creates the reader)
// Subsequent calls return everything since last drain

Cold-start semantics: the first drain call creates a reader and returns empty. Subsequent calls return all values accumulated since the previous drain. This enables stateful batch analysis without missing data.
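These semantics can be modeled with a cursor per reader. A minimal sketch of the documented behavior, not the actual implementation:

```rust
use std::collections::HashMap;

/// Model of the documented drain semantics: the first drain for a reader
/// registers a cursor and returns nothing; later drains return everything
/// appended since that reader's previous drain.
struct DrainStore {
    history: Vec<f32>,
    cursors: HashMap<String, usize>, // reader name -> index of last drain
}

impl DrainStore {
    fn new() -> Self {
        Self { history: Vec::new(), cursors: HashMap::new() }
    }

    fn push(&mut self, value: f32) {
        self.history.push(value);
    }

    fn drain(&mut self, reader: &str) -> Vec<f32> {
        let end = self.history.len();
        match self.cursors.insert(reader.to_string(), end) {
            None => Vec::new(), // cold start: create the reader, return empty
            Some(start) => self.history[start..end].to_vec(),
        }
    }
}
```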


💥 Breaking Changes

1. .with_serialization() → .with_remote_access()

// Before (v0.4.x)
reg.buffer(...).with_serialization();

// After (v0.5.0)
reg.buffer(...).with_remote_access();

2. RecordId::new() requires RecordOrigin

This affects custom buffer/record implementations only:

// Before (v0.4.x)
RecordId::new(type_id, idx)

// After (v0.5.0)
RecordId::new(type_id, idx, RecordOrigin::Source)

3. MCP subscription tools removed

The subscribe_record, unsubscribe_record, list_subscriptions, and get_notification_directory MCP tools have been replaced by drain_record. Update any LLM prompts or MCP tool configurations accordingly.


📦 Published Crates

New Crates

  • aimdb-persistence 0.1.0 — pluggable persistence layer
  • aimdb-persistence-sqlite 0.1.0 — SQLite backend

Updated

| Crate | Version |
| --- | --- |
| aimdb-core | 0.5.0 |
| aimdb-tokio-adapter | 0.5.0 |
| aimdb-embassy-adapter | 0.5.0 |
| aimdb-client | 0.5.0 |
| aimdb-sync | 0.5.0 |
| aimdb-mqtt-connector | 0.5.0 |
| aimdb-knx-connector | 0.3.0 |
| aimdb-cli | 0.5.0 |
| aimdb-mcp | 0.5.0 |

Unchanged


🔧 New MCP Tools

The aimdb-mcp server now provides these tools:

| Tool | Description |
| --- | --- |
| discover_instances | Find running AimDB servers |
| list_records | List all records with metadata |
| get_record | Get current value |
| set_record | Set writable record value |
| query_schema | Infer JSON Schema from value |
| get_instance_info | Server version and capabilities |
| drain_record | Batch pull accumulated history |
| graph_nodes | All graph nodes with origin/buffer info |
| graph_edges | Directed data-flow edges |
| graph_topo_order | Topological record ordering |

🖥️ New CLI Commands (aimdb graph)

# List all graph nodes (color-coded by origin)
aimdb graph nodes

# Show directed data-flow edges
aimdb graph edges

# Show topological (spawn) order
aimdb graph order

# Export to Graphviz DOT format
aimdb graph dot > pipeline.dot
dot -Tsvg pipeline.dot -o pipeline.svg

🚀 Migration Guide

Step 1: Update dependencies

[dependencies]
aimdb-core = "0.5.0"
aimdb-tokio-adapter = "0.5.0"
# Optional: add persistence
aimdb-persistence = "0.1.0"
aimdb-persistence-sqlite = "0.1.0"

Step 2: Rename .with_serialization() → .with_remote_access()

# Quick find & replace
grep -rn "with_serialization" src/
# Replace with:
#   .with_remote_access()

Step 3: Update RecordId::new() calls (custom buffer implementations only)

// Add RecordOrigin argument
RecordId::new(type_id, idx, RecordOrigin::Source)

Step 4: Update MCP tool usage

Replace subscribe_record / unsubscribe_record workflows with drain_record polling:

# Old: subscribe → wait → unsubscribe
# New: call drain_record periodically (first call always empty)
drain_record(socket_path: "...", record_name: "sensor.*")

📚 Examples

All examples updated for v0.5.0:

git clone https://github.com/aimdb-dev/aimdb.git && cd aimdb

# MQTT connector demo
cargo run -p tokio-mqtt-connector-demo

# KNX connector demo
cargo run -p tokio-knx-connector-demo

# Sync API demo
cargo run -p sync-api-demo

# Weather mesh demo (multi-station)
cargo run -p weather-hub

# Embedded (cross-compile)
cd examples/embassy-mqtt-connector-demo
cargo build --release --target thumbv7em-none-eabihf

🤝 Contributing

git clone https://github.com/aimdb-dev/aimdb.git
cd aimdb
make check  # fmt + clippy + test + embedded cross-compile

📄 License

Apache License 2.0 — see LICENSE.


v0.4.0

25 Dec 20:44


AimDB v0.4.0 Release Notes

Compile-time safe record keys with #[derive(RecordKey)] macro!


🎯 What's New in v0.4.0

This release introduces compile-time safe record keys via a new derive macro, transforming RecordKey from a struct to a trait. This enables user-defined enum keys with automatic string representation, eliminating runtime key typos and improving embedded system efficiency.


✨ Major Features

🔑 RecordKey Trait + Derive Macro

New crate aimdb-derive provides #[derive(RecordKey)] for compile-time checked keys!

Instead of error-prone string literals, define type-safe enum keys:

use aimdb_core::RecordKey;

#[derive(RecordKey, Clone, Copy, PartialEq, Eq, Debug)]
#[key_prefix = "sensor"]
pub enum SensorKey {
    #[key = "temp.indoor"]
    TempIndoor,
    
    #[key = "temp.outdoor"]
    TempOutdoor,
    
    #[key = "humidity"]
    #[link_address = "zigbee/sensors/humidity"]  // MQTT topic
    Humidity,
}

// Compile-time typo detection!
let producer = db.producer::<Temperature>(SensorKey::TempIndoor)?;
// vs runtime error with string: db.producer::<Temperature>("sensor.temp.indor")?;

Benefits:

  • πŸ›‘οΈ Compile-time safety: Typos caught at build time
  • πŸš€ Zero-allocation: Enum variants are Copy, no heap allocation
  • πŸ“ Connector metadata: #[link_address = "..."] for MQTT topics, KNX addresses
  • πŸ”§ IDE support: Autocomplete and refactoring work correctly
  • πŸ“¦ no_std compatible: Works on embedded targets

πŸ“ StringKey Type

The previous RecordKey struct is now StringKey with improved memory model:

use aimdb_core::StringKey;

// Static keys (zero allocation)
let key: StringKey = "sensors.temp".into();

// Dynamic keys (interned via Box::leak for O(1) Copy/Clone)
let key = StringKey::intern(dynamic_string);

Memory Model:

  • Static(&'static str) - Zero-allocation for string literals
  • Interned(&'static str) - Uses Box::leak for O(1) cloning
  • Designed for startup-time registration (<1000 keys)
  • Debug warning if >1000 interned keys
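The interning trick described above can be sketched in a few lines. This is the general Box::leak pattern, not AimDB's exact code:

```rust
/// Leaking a boxed str yields a &'static str, so the key becomes a Copy-able
/// pointer with O(1) clones. The memory is never reclaimed, which is why this
/// is meant for a bounded set of startup-time keys (<1000 in AimDB's docs).
fn intern(s: String) -> &'static str {
    Box::leak(s.into_boxed_str())
}
```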

πŸ› MQTT Connector Fix

Fixed initialization deadlock when subscribing to >10 MQTT topics (Issue #63)

The fix implements:

  1. Spawn-before-subscribe: Event loop spawned before topic subscriptions
  2. Dynamic channel capacity: Scales with topic count (topics + 10)
  3. Proper task yielding: Ensures scheduler runs between operations

Also upgraded rumqttc from 0.24 to 0.25.
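The shape of the fix can be illustrated with a plain bounded channel: spawn the consumer before enqueueing, and size the channel so startup can never fill it. A generic sketch, not the connector's actual code:

```rust
use std::sync::mpsc;
use std::thread;

/// Sketch of the fix's structure: the event loop (consumer) is spawned
/// before subscriptions are enqueued, and the channel capacity scales with
/// the topic count (topics + 10), so startup cannot fill it and block.
fn subscribe_all(topics: &[&str]) -> usize {
    // Dynamic capacity, as described in the fix.
    let (tx, rx) = mpsc::sync_channel::<String>(topics.len() + 10);

    // Spawn-before-subscribe: the event loop is already draining.
    let event_loop = thread::spawn(move || rx.iter().count());

    for topic in topics {
        tx.send(format!("SUBSCRIBE {topic}")).unwrap();
    }
    drop(tx); // close the channel so the event loop exits

    event_loop.join().unwrap() // number of subscriptions processed
}
```

With the old order (subscribe first, spawn after), a fixed capacity of 10 and more than 10 topics would block the sender forever, which is exactly the reported deadlock.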


💥 Breaking Changes

1. RecordKey: Struct → Trait

RecordKey is now a trait, not a struct.

// Before (v0.3.x)
use aimdb_core::RecordKey;
let key: RecordKey = "sensors.temp".into();

// After (v0.4.0)
use aimdb_core::StringKey;
let key: StringKey = "sensors.temp".into();

// Or use derive macro (recommended)
use aimdb_core::RecordKey;  // Now a trait

#[derive(RecordKey, Clone, Copy, PartialEq, Eq)]
pub enum AppKey {
    #[key = "sensors.temp"]
    SensorsTemp,
}

2. RecordKey Trait Bounds

If you have generic code over record keys:

// Before (v0.3.x)
fn process_key(key: RecordKey) { ... }

// After (v0.4.0)
fn process_key<K: RecordKey>(key: K) { ... }
// Or with StringKey specifically:
fn process_key(key: StringKey) { ... }

📦 Published Crates

New Crate

  • aimdb-derive — #[derive(RecordKey)] macro

Updated to v0.4.0

Unchanged


🚀 Quick Start

Using Derive Macro (Recommended)

use aimdb_core::{AimDbBuilder, RecordKey, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use serde::{Serialize, Deserialize};
use std::sync::Arc;

// Define type-safe keys
#[derive(RecordKey, Clone, Copy, PartialEq, Eq, Debug)]
pub enum AppKey {
    #[key = "sensors.temperature"]
    #[link_address = "home/sensors/temp"]  // MQTT topic
    Temperature,
    
    #[key = "sensors.humidity"]
    #[link_address = "home/sensors/humidity"]
    Humidity,
    
    #[key = "config.settings"]
    Settings,
}

#[derive(Clone, Debug, Serialize, Deserialize)]
struct SensorReading {
    value: f32,
    timestamp: u64,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let runtime = Arc::new(TokioAdapter::new()?);
    let mut builder = AimDbBuilder::new().runtime(runtime);

    // Register with type-safe keys
    builder.configure::<SensorReading>(AppKey::Temperature, |reg| {
        reg.buffer(BufferCfg::SingleLatest);
    });

    builder.configure::<SensorReading>(AppKey::Humidity, |reg| {
        reg.buffer(BufferCfg::SingleLatest);
    });

    let db = builder.build().await?;

    // Type-safe producer access
    let temp_producer = db.producer::<SensorReading>(AppKey::Temperature)?;
    let humidity_producer = db.producer::<SensorReading>(AppKey::Humidity)?;

    // Produce data
    temp_producer.produce(SensorReading { value: 22.5, timestamp: 1000 }).await?;
    humidity_producer.produce(SensorReading { value: 65.0, timestamp: 1001 }).await?;

    // Access link address for connector metadata
    println!("Temperature MQTT topic: {:?}", AppKey::Temperature.link_address());
    // Prints: Some("home/sensors/temp")

    Ok(())
}

Using StringKey (Dynamic Keys)

use aimdb_core::{AimDbBuilder, StringKey, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let runtime = Arc::new(TokioAdapter::new()?);
    let mut builder = AimDbBuilder::new().runtime(runtime);

    // Static string literals (zero allocation)
    builder.configure::<String>("app.logs", |reg| {
        reg.buffer(BufferCfg::SpmcRing { capacity: 1000 });
    });

    // Dynamic keys (for runtime-determined keys)
    let tenant_id = "tenant-123";
    let dynamic_key = StringKey::intern(format!("tenants.{}.data", tenant_id));
    builder.configure::<String>(dynamic_key, |reg| {
        reg.buffer(BufferCfg::Mailbox);
    });

    let db = builder.build().await?;
    Ok(())
}

📚 Migration Guide

Step 1: Update Dependencies

[dependencies]
aimdb-core = "0.4.0"
aimdb-tokio-adapter = "0.4.0"
# aimdb-mqtt-connector = "0.4.0"  # If using MQTT

# Enable derive macro (included by default)
# aimdb-core = { version = "0.4.0", features = ["derive"] }

Step 2: Choose Key Strategy

Option A: Keep String Literals (Minimal Change)

// Before (v0.3.x)
builder.configure::<Temperature>("sensors.temp", |reg| { ... });

// After (v0.4.0) - No change needed! &'static str implements RecordKey
builder.configure::<Temperature>("sensors.temp", |reg| { ... });

Option B: Adopt Derive Macro (Recommended)

// Define keys once
#[derive(RecordKey, Clone, Copy, PartialEq, Eq)]
pub enum Keys {
    #[key = "sensors.temp"]
    SensorsTemp,
}

// Use everywhere
builder.configure::<Temperature>(Keys::SensorsTemp, |reg| { ... });
let producer = db.producer::<Temperature>(Keys::SensorsTemp)?;

Step 3: Update RecordKey Imports

// Before (v0.3.x)
use aimdb_core::RecordKey;
let key: RecordKey = "name".into();

// After (v0.4.0)
use aimdb_core::StringKey;
let key: StringKey = "name".into();

Step 4: Update Generic Code

// Before (v0.3.x)
fn process(key: RecordKey) { ... }

// After (v0.4.0)
fn process<K: RecordKey>(key: K) { ... }
// Or specifically:
fn process(key: impl RecordKey) { ... }
fn process(key: StringKey) { ... }

🎯 Derive Macro Reference

Attributes

| Attribute | Level | Required | Description |
| --- | --- | --- | --- |
| #[key = "..."] | Variant | Yes | String representation for the key |
| #[key_prefix = "..."] | Enum | No | Prefix prepended to all variant keys |
| #[link_address = "..."] | Variant | No | Connector metadata (MQTT topic, KNX address) |

Example with All Features

use aimdb_core::RecordKey;

#[derive(RecordKey, Clone, Copy, PartialEq, Eq, Debug)]
#[key_prefix = "home.automation"]  // Applied to all variants
pub enum HomeKey {
    #[key = "lights.living"]
    #[link_address = "1/1/1"]  // KNX group address
    LightsLiving,
    
    #[key = "lights.bedroom"]
    #[link_address = "1/1/2"]
    LightsBedroom,
    
    #[key = "thermostat"]
    #[link_address = "mqtt://home/thermostat"]
    Thermostat,
}

// Generated methods:
// HomeKey::LightsLiving.as_str() -> "home.automation.lights.living"
// HomeKey::LightsLiving.link_address() -> Some("1/1/1")

Compile-Time Validation

The macro validates at compile time:

  • ✅ All variants have #[key = "..."] attribute
  • ✅ No duplicate keys (including after prefix)
  • ✅ Only unit variants (no tuple/struct variants)

// Compile error: duplicate key
#[derive(RecordKey)]
pub enum BadKeys {
    #[key = "same"]
    First,
    #[key = "same"]  // Error: duplicate key "same"
    Second,
}

// Compile error: missing key attribute
#[derive(RecordKey)]
pub enum BadKeys {
    #[key = "valid"]
    First,
    Second,  // Error: missing #[key = "..."] attribute
}

πŸ› Bug Fixes

MQTT Connector Deadlock (Issue #63)

Problem: When subscribing to more than 10 MQTT topics, the connector would deadlock during initialization.

Root Cause: The internal channel had a fixed capacity of 10, and subscriptions were made before spawning the event loop, causing the channel to fill up and block.

**Solu...


v0.3.0

06 Dec 20:01


AimDB v0.3.0 Release Notes

Major update with RecordId/RecordKey architecture and buffer metrics!


🎯 What's New in v0.3.0

This release introduces a complete architectural overhaul of AimDB's internal record storage system, enabling multi-instance records (same type, different keys), stable O(1) indexing, and comprehensive buffer metrics. The new RecordId/RecordKey system provides both the performance benefits of numeric indexing and the usability of human-readable keys.


✨ Major Features

🔑 RecordId + RecordKey Architecture

Complete rewrite of internal storage for stable record identification!

AimDB now supports multiple records of the same type with unique keys, enabling patterns like:

  • Multiple sensors of the same type (Temperature) with different keys ("sensors.indoor", "sensors.outdoor")
  • Multi-tenant configurations ("tenant.a.config", "tenant.b.config")
  • Regional data streams ("region.us.metrics", "region.eu.metrics")

Key Components:

  • RecordId: u32 index wrapper for O(1) Vec-based hot-path access
  • RecordKey: Hybrid &'static str / Arc<str> with zero-alloc static keys and flexible dynamic keys
  • O(1) key resolution via HashMap<RecordKey, RecordId>
  • Type introspection via HashMap<TypeId, Vec<RecordId>>

New API:

use aimdb_core::{AimDbBuilder, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use serde::{Serialize, Deserialize};
use std::sync::Arc;

#[derive(Clone, Debug, Serialize, Deserialize)]
struct Temperature {
    celsius: f32,
    sensor_id: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let runtime = Arc::new(TokioAdapter::new()?);
    
    let mut builder = AimDbBuilder::new().runtime(runtime);

    // Register MULTIPLE records of the same type with different keys
    builder.configure::<Temperature>("sensors.indoor", |reg| {
        reg.buffer(BufferCfg::SingleLatest);
    });

    builder.configure::<Temperature>("sensors.outdoor", |reg| {
        reg.buffer(BufferCfg::SingleLatest);
    });

    let db = builder.build().await?;

    // Key-based access for multi-instance records
    let indoor_producer = db.producer_by_key::<Temperature>("sensors.indoor")?;
    let outdoor_producer = db.producer_by_key::<Temperature>("sensors.outdoor")?;

    indoor_producer.produce(Temperature { celsius: 22.5, sensor_id: "indoor-1".into() }).await?;
    outdoor_producer.produce(Temperature { celsius: 15.2, sensor_id: "outdoor-1".into() }).await?;

    // Introspection
    let temp_ids = db.records_of_type::<Temperature>();  // Returns &[RecordId] with 2 IDs
    let id = db.resolve_key("sensors.indoor");           // Returns Option<RecordId>

    Ok(())
}

Key Naming Conventions:

  • Use dot-separated hierarchical names: "sensors.indoor", "config.app"
  • Keys must be unique across all records (duplicate keys panic at registration)
  • Static string literals ("key") are zero-allocation via &'static str
  • Dynamic keys (String::from("key")) use Arc<str> for efficient cloning

📊 Buffer Metrics API (Feature-Gated)

Comprehensive buffer introspection for monitoring and debugging!

Enable buffer metrics with the metrics feature flag:

[dependencies]
aimdb-core = { version = "0.3.0", features = ["metrics"] }
aimdb-tokio-adapter = { version = "0.3.0", features = ["metrics"] }

New Metrics:

use aimdb_core::buffer::BufferMetricsSnapshot;

// Get metrics from any record
let metadata = db.record_metadata::<Temperature>("sensors.indoor")?;

if let Some(metrics) = metadata.buffer_metrics {
    println!("Produced: {}", metrics.produced_count);
    println!("Consumed: {}", metrics.consumed_count);
    println!("Dropped: {}", metrics.dropped_count);
    println!("Occupancy: {}/{}", metrics.occupancy.0, metrics.occupancy.1);
}

Available Metrics:

  • produced_count: Total items pushed to the buffer
  • consumed_count: Total items consumed across all readers
  • dropped_count: Total items dropped due to lag (per-reader semantics documented)
  • occupancy: Current buffer fill level as (current, capacity) tuple
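A sketch of how such counters are typically kept with atomics. Field names mirror the documented metrics; the real snapshot type lives in aimdb-core:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative hot-path counters: bumped with relaxed atomics on produce
/// and consume, read together as a point-in-time snapshot. Not aimdb-core's
/// actual BufferMetricsSnapshot implementation.
#[derive(Default)]
struct BufferCounters {
    produced: AtomicU64,
    consumed: AtomicU64,
    dropped: AtomicU64,
}

impl BufferCounters {
    fn on_produce(&self) { self.produced.fetch_add(1, Ordering::Relaxed); }
    fn on_consume(&self) { self.consumed.fetch_add(1, Ordering::Relaxed); }

    /// Returns (produced, consumed, dropped) as seen at this instant.
    fn snapshot(&self) -> (u64, u64, u64) {
        (
            self.produced.load(Ordering::Relaxed),
            self.consumed.load(Ordering::Relaxed),
            self.dropped.load(Ordering::Relaxed),
        )
    }
}
```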

Supported Buffers:

  • ✅ SPMC Ring Buffer: Full metrics support
  • ✅ SingleLatest: Full metrics support
  • ✅ Mailbox: Full metrics support

Tokio Adapter: Full implementation with atomic counters
Embassy Adapter: Feature flag present (API consistency), but metrics not functional on embedded targets (requires std)

πŸ” Enhanced Introspection API

New methods for exploring records at runtime:

// Find all records of a specific type
let temperature_records = db.records_of_type::<Temperature>();
for record_id in temperature_records {
    let metadata = db.record_metadata_by_id(*record_id)?;
    println!("Temperature record: {} (key: {})", record_id.0, metadata.record_key);
}

// Resolve key to RecordId
if let Some(record_id) = db.resolve_key("sensors.indoor") {
    println!("Found record with ID: {}", record_id.0);
}

// Key-bound producers/consumers
let producer = db.producer_by_key::<Temperature>("sensors.indoor")?;
let consumer = db.consumer_by_key::<Temperature>("sensors.outdoor")?;

println!("Producer key: {}", producer.key());
println!("Consumer key: {}", consumer.key());

πŸ—οΈ Internal Architecture Improvements

Optimized storage for sub-50ms latency:

Before (v0.2.0):

BTreeMap<TypeId, Box<dyn AnyRecord>>  // O(log n) lookups

After (v0.3.0):

Vec<Box<dyn AnyRecord>>                      // O(1) hot-path access by RecordId
HashMap<RecordKey, RecordId>                 // O(1) name lookups
HashMap<TypeId, Vec<RecordId>>               // O(1) type introspection

Performance Benefits:

  • βœ… O(1) hot-path access (was O(log n))
  • βœ… Stable RecordId across application lifetime
  • βœ… Zero-allocation static keys
  • βœ… Efficient multi-instance type lookups
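The effect of the new layout can be sketched with a simplified stand-in registry (hypothetical `Registry` type with `&'static str` payloads in place of the real `Box<dyn AnyRecord>` storage; not the crate's actual implementation):

```rust
use std::any::TypeId;
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct RecordId(usize);

// Simplified stand-in for the v0.3.0 storage layout described above.
struct Registry {
    records: Vec<&'static str>,              // O(1) hot-path access by RecordId
    by_key: HashMap<&'static str, RecordId>, // O(1) name lookups
    by_type: HashMap<TypeId, Vec<RecordId>>, // O(1) type introspection
}

impl Registry {
    fn new() -> Self {
        Self { records: Vec::new(), by_key: HashMap::new(), by_type: HashMap::new() }
    }

    fn register<T: 'static>(&mut self, key: &'static str) -> RecordId {
        // The index into `records` never changes, so the id is stable
        // for the application's lifetime.
        let id = RecordId(self.records.len());
        self.records.push(key);
        self.by_key.insert(key, id);
        self.by_type.entry(TypeId::of::<T>()).or_default().push(id);
        id
    }
}

struct Temperature;

fn main() {
    let mut reg = Registry::new();
    let indoor = reg.register::<Temperature>("sensors.indoor");
    let outdoor = reg.register::<Temperature>("sensors.outdoor");
    // Hot path: direct index by RecordId, no tree traversal.
    assert_eq!(reg.records[indoor.0], "sensors.indoor");
    // Key resolution is a single hash lookup.
    assert_eq!(reg.by_key["sensors.outdoor"], outdoor);
    // Type introspection returns every registered instance of the type.
    assert_eq!(reg.by_type[&TypeId::of::<Temperature>()].len(), 2);
}
```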

πŸ”¨ Breaking Changes

1. Record Registration API

All records now require a key parameter:

// Before (v0.2.x)
builder.configure::<Temperature>(|reg| {
    reg.buffer(BufferCfg::SingleLatest);
});

// After (v0.3.0)
builder.configure::<Temperature>("sensor.temperature", |reg| {
    reg.buffer(BufferCfg::SingleLatest);
});

2. Type-Based Lookup Ambiguity

If you register multiple records of the same type, type-based methods return an AmbiguousType error:

// With multiple Temperature records registered...
db.produce(temp).await  // ❌ Returns Err(AmbiguousType { count: 2, ... })

// Use key-based methods instead:
db.produce_by_key("sensors.indoor", temp).await  // βœ… Works correctly

Migration Strategy:

  • Single-instance records: Type-based API still works (produce(), subscribe(), etc.)
  • Multi-instance records: Use key-based API (produce_by_key(), subscribe_by_key(), etc.)
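The resolution rule can be illustrated with a self-contained sketch (hypothetical `resolve_type` helper with a plain `String` error; the real crate returns a structured error carrying the instance count):

```rust
use std::any::TypeId;
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct RecordId(usize);

// Hypothetical helper mirroring the v0.3.0 rule: a type-based lookup
// succeeds only when exactly one record of that type is registered.
fn resolve_type(
    by_type: &HashMap<TypeId, Vec<RecordId>>,
    type_id: TypeId,
) -> Result<RecordId, String> {
    match by_type.get(&type_id).map(Vec::as_slice) {
        Some(&[id]) => Ok(id),
        Some(ids) => Err(format!("AmbiguousType: {} instances registered", ids.len())),
        None => Err("RecordNotFound".to_string()),
    }
}

struct Temperature;

fn main() {
    let mut by_type: HashMap<TypeId, Vec<RecordId>> = HashMap::new();
    let t = TypeId::of::<Temperature>();
    by_type.insert(t, vec![RecordId(0)]);
    // Single instance: type-based lookup resolves unambiguously.
    assert_eq!(resolve_type(&by_type, t), Ok(RecordId(0)));
    // Second instance of the same type: lookup becomes ambiguous,
    // so callers must fall back to key-based access.
    by_type.get_mut(&t).unwrap().push(RecordId(1));
    assert!(resolve_type(&by_type, t).is_err());
}
```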

3. DynBuffer Implementation

Custom buffer implementations must now explicitly implement DynBuffer<T>:

// Before (v0.2.x) - automatic via blanket impl
impl<T: Clone + Send> Buffer<T> for MyBuffer<T> { ... }
// DynBuffer was automatically implemented

// After (v0.3.0) - explicit implementation required
impl<T: Clone + Send> Buffer<T> for MyBuffer<T> { ... }

impl<T: Clone + Send + 'static> DynBuffer<T> for MyBuffer<T> {
    fn push(&self, value: T) {
        <Self as Buffer<T>>::push(self, value)
    }
    
    fn subscribe_boxed(&self) -> Box<dyn BufferReader<T> + Send> {
        Box::new(self.subscribe())
    }
    
    fn as_any(&self) -> &dyn core::any::Any {
        self
    }
    
    // Optional: implement metrics_snapshot() if you support metrics
    #[cfg(feature = "metrics")]
    fn metrics_snapshot(&self) -> Option<BufferMetricsSnapshot> {
        None // or Some(...) if you track metrics
    }
}

Why this change? It enables adapters to provide metrics_snapshot() when the metrics feature is enabled.

4. RecordMetadata Changes

New fields added to RecordMetadata:

pub struct RecordMetadata {
    pub record_id: u32,           // ← NEW: Stable numeric identifier
    pub record_key: String,       // ← NEW: Human-readable key
    pub type_id: u64,
    pub type_name: String,
    pub buffer_type: String,
    pub buffer_capacity: Option<usize>,
    pub producer_count: usize,
    pub consumer_count: usize,
    pub outbound_connector_count: usize,
    pub inbound_connector_count: usize,
    #[cfg(feature = "metrics")]
    pub buffer_metrics: Option<BufferMetricsSnapshot>,  // ← NEW: Buffer metrics
}

πŸ“¦ Published Crates

Updated to v0.3.0

Updated to v0.2.0

Unchanged


πŸš€ Quick Start

Multi-Instance Records Example

use aimdb_core::{AimDbBuilder, buffer::BufferCfg};
use aimdb_tokio_adapter::TokioAdapter;
use serde::{Serialize, Deserialize};
use std::sync::Arc;

#[derive(Clone, Debug, Serialize, Deserialize)]
struct SensorReading {...
Read more

v0.2.0

20 Nov 21:22


Summary

This release introduces bidirectional connector support, enabling true two-way data synchronization between AimDB and external systems. The new architecture supports simultaneous publishing and subscribing with automatic message routing, working seamlessly across both Tokio (std) and Embassy (no_std) runtimes.

Highlights

  • πŸ”„ Bidirectional Connectors: New .link_to() and .link_from() APIs for clear directional data flows
  • 🎯 Type-Erased Router: Automatic routing of incoming messages to correct typed producers
  • πŸ—οΈ ConnectorBuilder Pattern: Simplified connector registration with automatic initialization
  • πŸ“‘ Enhanced MQTT Connector: Complete rewrite supporting simultaneous pub/sub with automatic reconnection
  • 🌐 Embassy Network Integration: Connectors can now access Embassy's network stack directly
  • πŸ“š Comprehensive Guide: New 1000+ line connector development guide with real-world examples

Breaking Changes

  • aimdb-core: .build() is now async; connector registration changed from .with_connector(scheme, instance) to .with_connector(builder)
  • aimdb-core: .link() deprecated in favor of .link_to() (outbound) and .link_from() (inbound)
  • aimdb-core: Outbound connector architecture refactored to trait-based system:
    • Removed: TypedRecord::spawn_outbound_consumers() method (was called automatically)
    • Added: ConsumerTrait, AnyReader traits for type-erased outbound routing
    • Added: AimDb::collect_outbound_routes() method to gather configured routes
    • Required: Connectors must implement spawn_outbound_publishers() and call it in ConnectorBuilder::build()
  • aimdb-mqtt-connector: API changed from MqttConnector::new() to MqttConnectorBuilder::new(); automatic task spawning removes the need for manual background task management
  • aimdb-mqtt-connector: Added spawn_outbound_publishers() method; must be called in build() for outbound publishing to work

Modified Crates

See individual changelogs for detailed changes:

Migration Guide

1. Update connector registration:

// Old (v0.1.0)
let mqtt = MqttConnector::new("mqtt://broker:1883").await?;
builder.with_connector("mqtt", Arc::new(mqtt))

// New (v0.2.0)
builder.with_connector(MqttConnectorBuilder::new("mqtt://broker:1883"))

2. Make build() async:

// Old (v0.1.0)
let db = builder.build()?;

// New (v0.2.0)
let db = builder.build().await?;

3. Use directional link methods:

// Old (v0.1.0)
.link("mqtt://sensors/temp")

// New (v0.2.0)
.link_to("mqtt://sensors/temp")    // For publishing (AimDB β†’ External)
.link_from("mqtt://commands/temp")  // For subscribing (External β†’ AimDB)

4. Remove manual MQTT task spawning (Embassy):

// Old (v0.1.0) - Manual task spawning required
let mqtt_result = MqttConnector::create(...).await?;
spawner.spawn(mqtt_task(mqtt_result.task))?;
builder.with_connector("mqtt", Arc::new(mqtt_result.connector))

// New (v0.2.0) - Automatic task spawning
builder.with_connector(MqttConnectorBuilder::new("mqtt://broker:1883"))
// Tasks spawn automatically during build()

5. Update custom connectors to spawn outbound publishers:

If you've implemented a custom connector, you must add spawn_outbound_publishers() support:

// Old (v0.1.0) - Outbound consumers spawned automatically
impl ConnectorBuilder for MyConnectorBuilder {
    fn build<R>(&self, db: &AimDb<R>) -> DbResult<Arc<dyn Connector>> {
        // ... setup code ...
        Ok(Arc::new(MyConnector { /* fields */ }))
    }
    // Outbound publishing happened automatically via TypedRecord::spawn_outbound_consumers()
}

// New (v0.2.0) - Must explicitly spawn outbound publishers
impl ConnectorBuilder for MyConnectorBuilder {
    fn build<R>(&self, db: &AimDb<R>) -> DbResult<Arc<dyn Connector>> {
        // ... setup code ...
        let connector = MyConnector { /* fields */ };
        
        // REQUIRED: Collect and spawn outbound publishers
        let outbound_routes = db.collect_outbound_routes(self.protocol_name());
        connector.spawn_outbound_publishers(db, outbound_routes)?;
        
        Ok(Arc::new(connector))
    }
}

// REQUIRED: Implement spawn_outbound_publishers method
impl MyConnector {
    fn spawn_outbound_publishers<R: RuntimeAdapter + 'static>(
        &self,
        db: &AimDb<R>,
        routes: Vec<(String, Box<dyn ConsumerTrait>, SerializerFn, Vec<(String, String)>)>,
    ) -> DbResult<()> {
        for (topic, consumer, serializer, _config) in routes {
            let client = self.client.clone();
            let topic_clone = topic.clone();
            
            db.runtime().spawn(async move {
                // Subscribe to record updates using ConsumerTrait
                match consumer.subscribe_any().await {
                    Ok(mut reader) => {
                        loop {
                            match reader.recv_any().await {
                                Ok(value) => {
                                    // Serialize and publish
                                    if let Ok(bytes) = serializer(&*value) {
                                        let _ = client.publish(&topic_clone, bytes).await;
                                    }
                                }
                                Err(_) => break,
                            }
                        }
                    }
                    Err(_) => { /* Log error */ }
                }
            })?;
        }
        Ok(())
    }
}

Why this change? The new trait-based architecture provides:

  • βœ… Symmetry with inbound routing (ProducerTrait ↔ ConsumerTrait)
  • βœ… Testability (can mock ConsumerTrait without real records)
  • βœ… Type safety via factory pattern (type capture at configuration time)
  • βœ… Maintainability (connector logic stays in connector crate)

Migration checklist for custom connectors:

  • Add spawn_outbound_publishers() method to connector implementation
  • Call db.collect_outbound_routes(protocol_name) in ConnectorBuilder::build()
  • Call connector.spawn_outbound_publishers(db, routes)? before returning
  • Use ConsumerTrait::subscribe_any() to get type-erased readers
  • Handle serialization with provided SerializerFn
  • Test both inbound (.link_from()) and outbound (.link_to()) data flows

See Connector Development Guide for complete examples.

Documentation

  • Added comprehensive Connector Development Guide
  • Updated MQTT connector examples for both Tokio and Embassy
  • Enhanced API documentation with bidirectional patterns

v0.1.0

06 Nov 21:57


Added

Core Database (aimdb-core)

  • Initial release of AimDB async in-memory database engine
  • Type-safe record system using TypeId-based routing
  • Three buffer types for different data flow patterns:
    • SPMC Ring Buffer: High-frequency data streams with bounded memory
    • SingleLatest: State synchronization and configuration updates
    • Mailbox: Commands and one-shot events
  • Producer-consumer model with async task spawning
  • Runtime adapter abstraction for cross-platform support
  • no_std compatibility for embedded targets
  • Error handling with comprehensive DbResult<T> and DbError types
  • Remote access protocol (AimX v1) for cross-process introspection
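Of the three buffer types, SingleLatest has the least obvious semantics: writers overwrite, and a reader only ever observes the newest value. A minimal Mutex-based stand-in illustrates the idea (the real implementation is lock-free):

```rust
use std::sync::{Arc, Mutex};

// Minimal sketch of SingleLatest semantics: each push overwrites the
// previous value, so slow readers skip stale data instead of queueing it.
#[derive(Clone)]
struct SingleLatest<T>(Arc<Mutex<Option<T>>>);

impl<T: Clone> SingleLatest<T> {
    fn new() -> Self {
        Self(Arc::new(Mutex::new(None)))
    }

    fn push(&self, value: T) {
        // Overwrite unconditionally; older values are dropped.
        *self.0.lock().unwrap() = Some(value);
    }

    fn latest(&self) -> Option<T> {
        self.0.lock().unwrap().clone()
    }
}

fn main() {
    let buf = SingleLatest::new();
    buf.push(21.0);
    buf.push(22.5); // overwrites: a reader never sees 21.0 after this
    assert_eq!(buf.latest(), Some(22.5));
}
```

By contrast, the SPMC ring buffer retains a bounded history for streams, and the mailbox hands each command to exactly one consumer.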

Runtime Adapters

  • Tokio Adapter (aimdb-tokio-adapter): Full-featured std runtime support
    • Lock-free buffer implementations
    • Configurable buffer capacities
    • Comprehensive async task spawning
  • Embassy Adapter (aimdb-embassy-adapter): Embedded no_std runtime support
    • Configurable task pool sizes (8/16/32 concurrent tasks)
    • Optimized for resource-constrained devices
    • Compatible with ARM Cortex-M targets

MQTT Connector (aimdb-mqtt-connector)

  • Dual runtime support for both Tokio and Embassy
  • Automatic consumer registration via builder pattern
  • Topic mapping with QoS and retain configuration
  • Pluggable serializers (JSON, MessagePack, Postcard, custom)
  • Automatic reconnection handling
  • Uses rumqttc for std environments
  • Uses mountain-mqtt for embedded environments

Developer Tools

  • MCP Server (aimdb-mcp): LLM-powered introspection and debugging
    • Discover running AimDB instances
    • Query record values and schemas
    • Subscribe to real-time updates
    • Set writable record values
    • JSON Schema inference from record values
    • Notification persistence to JSONL files
  • CLI Tool (aimdb-cli): Command-line interface (skeleton)
    • Instance discovery and management commands
    • Record inspection capabilities
    • Real-time watch functionality
  • Client Library (aimdb-client): Reusable connection and discovery logic
    • Unix domain socket communication
    • AimX v1 protocol implementation
    • Clean error handling

Synchronous API (aimdb-sync)

  • Blocking wrapper around async AimDB core
  • Thread-safe synchronous record access
  • Automatic Tokio runtime management
  • Ideal for gradual migration from sync to async

Documentation & Examples

  • Comprehensive README with architecture overview
  • Individual crate documentation with examples
  • 12 detailed design documents in /docs/design
  • Working examples:
    • tokio-mqtt-connector-demo: Full Tokio MQTT integration
    • embassy-mqtt-connector-demo: Embedded RP2040 with WiFi MQTT
    • sync-api-demo: Synchronous API usage patterns
    • remote-access-demo: Cross-process introspection server

Build & CI Infrastructure

  • Comprehensive Makefile with color-coded output
  • GitHub Actions workflows:
    • Continuous integration (format, lint, test)
    • Security audits (weekly schedule + on-demand)
    • Documentation generation
    • Release automation
  • Cross-compilation testing for thumbv7em-none-eabihf target
  • cargo-deny configuration for license and dependency auditing
  • Dev container setup for consistent development environment

Design Goals Achieved

  • βœ… Sub-50ms latency for real-time synchronization
  • βœ… Lock-free buffer operations
  • βœ… Cross-platform support (MCU β†’ edge β†’ cloud)
  • βœ… Type safety with zero-cost abstractions
  • βœ… Protocol-agnostic connector architecture

Known Limitations

  • Kafka and DDS connectors planned for future releases
  • CLI tool is currently a skeleton implementation
  • Performance benchmarks not yet included
  • Limited to Unix domain sockets for remote access (no TCP yet)

Dependencies

  • Rust 1.75+ required
  • Tokio 1.47+ for std environments
  • Embassy 0.9+ for embedded environments
  • See deny.toml for approved dependency licenses

Breaking Changes

None (initial release)

Migration Guide

Not applicable (initial release)


Release Notes

v0.1.0 - "Foundation Release"

AimDB v0.1.0 establishes the foundational architecture for async, in-memory data synchronization across MCU β†’ edge β†’ cloud environments. This release focuses on core functionality, dual runtime support, and developer tooling.

Highlights:

  • πŸš€ Dual runtime support: Works on both standard library (Tokio) and embedded (Embassy)
  • πŸ”’ Type-safe record system eliminates runtime string lookups
  • πŸ“¦ Three buffer types cover most real-time data patterns
  • πŸ”Œ MQTT connector works in both std and no_std environments
  • πŸ€– MCP server enables LLM-powered introspection
  • βœ… 27+ core tests, comprehensive CI/CD, security auditing

Get Started:

cargo add aimdb-core aimdb-tokio-adapter

See README.md for quickstart guide and examples.

Feedback Welcome:
This is an early release. Please report issues, suggest features, or contribute at:
https://github.com/aimdb-dev/aimdb