Why Vectorless?
RAG without the baggage.
Reasoning-Native
LLMs navigate hierarchical document trees with semantic understanding — not vector proximity.
No Vector Database
Eliminate embedding pipelines, vector stores, and similarity search entirely. Trees are the index.
Rust-Powered
Core engine in Rust with Python bindings. Arena-based trees, async I/O, and zero-copy traversal.
Multi-Algorithm Search
Beam search, MCTS, and greedy algorithms, with an LLM-guided Pilot at key decision points.
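Beam search over such a tree keeps the best few partial paths at each depth. In this sketch a keyword-overlap `score` function stands in for the LLM relevance judgment a real Pilot would make at each fork; nodes are plain dicts for brevity.

```python
def beam_search(root, score, beam_width=2, max_depth=3):
    # Each beam entry is (path, cumulative score); score() stands in for
    # an LLM relevance judgment at each fork (hypothetical stand-in).
    beam = [([root], score(root))]
    for _ in range(max_depth):
        candidates = []
        for path, total in beam:
            children = path[-1]["children"]
            if not children:
                candidates.append((path, total))  # leaf: carry path forward
            for child in children:
                candidates.append((path + [child], total + score(child)))
        # Keep only the beam_width highest-scoring partial paths.
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beam

query = {"install", "linux"}
score = lambda node: len(query & set(node["summary"].lower().split()))

tree = {"summary": "manual", "children": [
    {"summary": "install on linux", "children": []},
    {"summary": "api reference", "children": []},
]}
best_path, _ = beam_search(tree, score)[0]
```

Swapping the ranking step for rollouts gives MCTS, and `beam_width=1` degenerates to greedy search, which is why all three share one traversal core.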
Explainable Results
Full reasoning chain traces every navigation decision. Audit how and why content was retrieved.
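A reasoning chain can be as simple as a list of decision records appended during navigation. Field names here are hypothetical, not the library's actual trace schema.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    at: str              # node where the fork occurred
    options: list[str]   # child summaries the navigator saw
    chose: str           # the branch it took
    rationale: str       # the LLM's stated reason, kept verbatim

chain: list[Decision] = []
chain.append(Decision(
    at="Manual",
    options=["Install", "Usage"],
    chose="Install",
    rationale="Query asks about setup; the Install summary matches.",
))

# Auditing retrieval is just replaying the chain.
audit = [f"{d.at}: chose {d.chose} because {d.rationale}" for d in chain]
```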
PDF & Markdown
Index PDFs and Markdown out of the box. Hierarchical structure extracted automatically.
How It Works
Index
Parse documents into hierarchical semantic trees with LLM-generated summaries.
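For Markdown, the hierarchy is already in the headings. A minimal sketch, splitting on level-2 headings, with `summarize()` standing in for the LLM summarization call:

```python
import re

def markdown_to_tree(text, summarize=lambda body: body[:60]):
    # summarize() is a stand-in for an LLM call that writes the node
    # summaries the navigator will later read (hypothetical default).
    root = {"title": "doc", "summary": summarize(text), "children": []}
    for chunk in re.split(r"^## ", text, flags=re.M)[1:]:
        title, _, body = chunk.partition("\n")
        root["children"].append({
            "title": title.strip(),
            "summary": summarize(body.strip()),
            "children": [],
        })
    return root

doc = "## Install\nRun the installer.\n## Usage\nPass --index to build a tree."
tree = markdown_to_tree(doc)
```

A real parser would recurse over all heading levels (and PDF bookmarks), but the shape of the result is the same: sections become nodes, summaries become the index.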
Navigate
The Pilot uses an LLM to navigate the tree at key forks, while beam search explores multiple paths in parallel.
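A fork decision can be sketched as one prompt listing the child summaries. `ask_llm` is a stand-in for a chat-completion call; the stub below only demonstrates the control flow.

```python
def pilot_choose(query: str, node: dict, ask_llm) -> dict:
    # ask_llm stands in for a chat-completion call (hypothetical);
    # the Pilot consults it only at forks, not at every node.
    options = "\n".join(
        f"{i}: {child['summary']}" for i, child in enumerate(node["children"])
    )
    prompt = (
        f"Query: {query}\nWhich section is most relevant?\n{options}\n"
        "Answer with the number only."
    )
    return node["children"][int(ask_llm(prompt))]

fork = {"summary": "manual", "children": [
    {"summary": "API reference"},
    {"summary": "Installation guide"},
]}
stub_llm = lambda prompt: "1"  # a real call would go to the model here
chosen = pilot_choose("how do I install this?", fork, stub_llm)
```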
Retrieve
Evaluate sufficiency and backtrack if needed. Aggregate only the most relevant content within budget.
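The retrieval step can be sketched as greedy aggregation under a budget, with `is_sufficient()` standing in for the LLM's sufficiency check; backtracking into the tree is omitted here for brevity.

```python
def aggregate(snippets, is_sufficient, budget_chars=200):
    # snippets are assumed pre-ranked by relevance; is_sufficient() stands
    # in for an LLM judgment that the gathered context answers the query.
    picked, used = [], 0
    for text in snippets:
        if used + len(text) > budget_chars:
            continue  # skip anything that would blow the budget
        picked.append(text)
        used += len(text)
        if is_sufficient(picked):
            break  # enough context: stop spending budget
    return picked

snippets = ["Install with pip.", "History of the project " * 20, "Set PATH."]
result = aggregate(snippets, is_sufficient=lambda ctx: len(ctx) >= 2)
```

If the check never passes, a real navigator would backtrack to the last fork and expand a different branch rather than return an insufficient context.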
