These guides show how to build common AI application patterns using NodeDB's existing engines. Each guide includes production-ready SQL examples.
NodeDB provides the storage and retrieval layer; your application handles chunking, embedding, reranking, and LLM generation. The boundary is clean: we store, index, search, and fuse, while you chunk, embed, rerank, and generate.
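As a sketch of the application-side half of that boundary, here is a minimal fixed-size chunker with overlap, the step your code runs before sending embeddings to NodeDB. The function name and parameters are illustrative, not part of any NodeDB API:

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlap keeps sentence fragments shared across chunk boundaries,
    which helps retrieval when a query matches text near a split point.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

chunks = chunk_text("a" * 1000, size=400, overlap=50)
print(len(chunks))  # 3 overlapping chunks covering the full text
```

Real pipelines usually split on token or sentence boundaries rather than raw characters; the RAG Pipelines guide covers the storage side of this flow.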
- RAG Pipelines — Basic RAG, hybrid vector+BM25, filtered retrieval, parent-document, conversational RAG
- GraphRAG — Entity extraction, seed retrieval + graph expansion, community summarization, disambiguation
- Multi-Modal Search — Multiple vector columns, cross-modal CLIP search, multi-modal RRF fusion, ColBERT multi-vector
- Agent Memory — Episodic (conversation logs), semantic (distilled facts), working (KV + TTL), scheduled consolidation
- Feature Store — Columnar engine for training features, point-in-time lookups, batch export, online serving
- Evaluation Tracking — Experiment metrics, retriever comparison, drift detection
- On-Device AI — NodeDB-Lite vector search, offline RAG, CRDT sync, WASM deployment, privacy
- CDC for Inference Triggers — Change streams for embedding pipelines, graph re-indexing, model output routing
- Multi-Tenancy for AI SaaS — WAL-level tenant isolation, RLS during vector search, per-tenant budgets
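Several of the guides above lean on rank fusion (hybrid vector+BM25, multi-modal RRF). NodeDB performs fusion server-side; as a reference for what reciprocal rank fusion computes, here is a standalone sketch (identifiers are illustrative, not a NodeDB API):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked result lists with reciprocal rank fusion (RRF).

    Each document scores sum(1 / (k + rank)) over the lists it appears
    in; documents ranked well in multiple lists rise to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document ids by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["d3", "d1", "d7"]  # ranked by vector similarity
bm25_hits = ["d1", "d9", "d3"]    # ranked by BM25
print(rrf_fuse([vector_hits, bm25_hits]))  # ['d1', 'd3', 'd9', 'd7']
```

The constant `k` (60 is the value from the original RRF paper) damps the influence of top ranks so that one list cannot dominate the fused ordering.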