This repository is the "killer demo" for the Lár Engine — an open-source, glass box framework for building auditable, self-correcting AI agents.
This app is a live, interactive RAG agent that doesn't just give you an answer.
It shows you its work.
You can watch the agent plan, retrieve, write, critique itself, and loop until the final answer is strong, grounded, and verifiable.
| Other Frameworks (“Black Box”) | Lár (“Glass Box”) |
|---|---|
| Returns a single answer with no trace. | Shows a full execution trace with diffs. |
| Silent failures — no idea why an agent messed up. | Every step’s state changes are visible and auditable. |
| Debugging = guesswork. | Debugging = inspecting exact node transitions. |
| “I don’t know what happened.” | “I see exactly what happened.” |
This demo intentionally creates a self-correcting loop so you can see Lár at full power.
You don't have to guess. Here is a real log from this demo after hitting the Gemini API's free-tier rate limit.
Instead of a generic `500 ERROR`, the Lár engine caught the failure, logged the exact reason, and ended the graph gracefully.
Execution Summary:
| Step | Node | Outcome | Key Changes |
|---|---|---|---|
| 0 | LLMNode | success | + ADDED: 'search_query' |
| 1 | ToolNode | success | + ADDED: 'retrieved_context' |
| 2 | LLMNode | success | + ADDED: 'draft_answer' |
| 3 | LLMNode | error | + ADDED: 'error': "APIConnectionError" |
This is the Lár difference. You know the exact node (LLMNode), the exact step (3), and the exact reason (APIConnectionError) for the failure. You can't debug a "black box," but you can always fix a "glass box."
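The pattern behind this graceful failure can be sketched in plain Python. This is a hypothetical node runner for illustration, not Lár's actual API: each node runs inside a try/except, and an exception becomes an auditable state change instead of a crash.

```python
# Hypothetical sketch of "fail into state" error handling (not Lár's real API).
def run_node(node, state):
    """Run one node; on failure, record the error in state instead of raising."""
    try:
        changes = node(state)   # a node returns a dict of state changes
        state.update(changes)
        return "success"
    except Exception as exc:
        # The graph ends gracefully with an auditable reason.
        state["error"] = type(exc).__name__
        return "error"

def run_graph(nodes, state):
    """Execute nodes in order, stopping at the first failure."""
    trace = []
    for step, node in enumerate(nodes):
        outcome = run_node(node, state)
        trace.append((step, node.__name__, outcome))
        if outcome == "error":
            break
    return trace
```

Running a failing node through this loop yields exactly the kind of trace shown in the table above, with the exception class name stored under `'error'`.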
This project runs with two main scripts:
- `build_index.py` — builds a FAISS vector store
- `rag_app.py` — launches the interactive Streamlit Glass Box RAG app
- Python 3.10+
- Poetry
- Google Gemini API Key
- macOS/Linux/WSL/Windows
```bash
git clone https://github.com/snath-ai/rag-demo.git
cd rag-demo
poetry install
```

Create a `.env` file:

```bash
GOOGLE_API_KEY="YOUR_API_KEY_HERE"
```

Build the index:

```bash
poetry run python build_index.py
```

This creates your FAISS index in `vector_store/`.
```bash
poetry run streamlit run rag_app.py
```

Open http://localhost:8501 in your browser.
Ask:
“What is the recommendation for Project Astra-7?”
Watch the agent plan → retrieve → draft → critique → revise → finalize.
This demo integrates:
- Local similarity search (FAISS)
- Cloud LLM reasoning (Gemini)
- Critique-driven self-correction
- Lár’s define-by-run execution
- Full state diff visibility
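Gluing these pieces together, the writer step is plain retrieval-augmented prompting: the retrieved chunks are stitched into the prompt so the draft stays grounded. A minimal sketch of the idea — the prompt wording and function name are illustrative, not the demo's actual template:

```python
def build_grounded_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a RAG prompt that restricts the LLM to the retrieved context."""
    context = "\n\n".join(
        f"[{i}] {chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    return (
        "Answer the question using ONLY the context below. "
        "Cite chunk numbers like [0].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Numbering the chunks makes the answer citable, which in turn gives the critic something concrete to check the draft against.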
```mermaid
graph TD
    A[User Question] --> B[Planner LLMNode<br/>'Plan Query']
    B --> C[Retriever ToolNode<br/>'FAISS Search']
    C --> D[Writer LLMNode<br/>'Draft Answer']
    D --> E[Critic LLMNode<br/>'Critique & Score']
    E --> F{RouterNode<br/>'Is It Good Enough?'}
    F -- "no" --> G[Reviser LLMNode<br/>'Improve Answer']
    G --> D
    F -- "yes" --> H[Finalize Node<br/>'Return Answer']
```
- Planner interprets question
- Retriever gathers relevant context
- Writer drafts
- Critic evaluates
- Router decides:
- weak → revise loop
- strong → finalize
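The router's revise-or-finalize decision is, at its core, a scored loop with a cap on iterations. A hedged sketch of that loop — the threshold and round limit are illustrative, and the demo's actual critic is an LLM, stubbed here as a plain callable:

```python
def self_correct(draft, critique, revise, threshold=0.8, max_rounds=3):
    """Loop draft -> critique -> revise until the score clears the threshold."""
    answer = draft()
    for _ in range(max_rounds):
        score, feedback = critique(answer)
        if score >= threshold:             # Router says "yes": finalize
            return answer, score
        answer = revise(answer, feedback)  # Router says "no": revise loop
    return answer, critique(answer)[0]     # give up after max_rounds
```

The `max_rounds` cap matters: without it, a critic that never scores the draft above the threshold would loop forever (and burn API quota).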
rag-demo/
├── build_index.py
├── rag_app.py
├── document_text.txt
├── vector_store/
├── .env
├── pyproject.toml
└── README.md
If you build an agent using the Lár Engine, you are building a dependable, verifiable system. Help us spread the philosophy of the "Glass Box" by displaying the badge below in your project's README.
By adopting this badge, you signal to users and collaborators that your agent is built for production reliability and auditability.
Add the Auditable badge to your project:
Badge Markdown:
[](https://docs.snath.ai)

Lár is designed for Agentic IDEs (Cursor, Windsurf, Antigravity) and strict code generation.
We provide a 3-Step Workflow to make your IDE an expert Lár Architect.
Instead of pasting massive prompts, simply reference the master files in the lar/ directory (or download them from the main repo).
1. Context (The Brain): In your IDE chat, reference `@lar/IDE_MASTER_PROMPT.md`. This loads the strict typing rules and "Code-as-Graph" philosophy.
2. Integrations (The Hands): Reference `@lar/IDE_INTEGRATION_PROMPT.md` to generate production-ready API wrappers in seconds.
3. Scaffold (The Ask): Open `@lar/IDE_PROMPT_TEMPLATE.md`, fill in your agent's goal, and ask the IDE to "Implement this."
Example Prompt to Cursor/Windsurf: "Using the rules in @lar/IDE_MASTER_PROMPT.md, implement the agent described in @lar/IDE_PROMPT_TEMPLATE.md."
Lár is an open-source agent framework built to be clear, debuggable, and developer-friendly. If this project helps you, consider supporting its development through GitHub Sponsors.
Become a sponsor → Sponsor on GitHub
Your support helps me continue improving the framework and building new tools for the community.
- Lár Engine by Aadithya Vishnu Sajeev — glass-box agent framework
- FAISS — vector search
- SentenceTransformers — embeddings
- Streamlit — UI
This project is licensed under the Apache License 2.0.
This means:
- You are free to use Lár in personal, academic, or commercial projects.
- You may modify and distribute the code.
- You MUST retain the `LICENSE` and `NOTICE` files.
- If you distribute a modified version, you must document what you changed.
Apache 2.0 protects the original author (Aadithya Vishnu Sajeev)
while encouraging broad adoption and community collaboration.
For developers building on Lár:
Please ensure that the LICENSE and NOTICE files remain intact
to preserve full legal compatibility with the Apache 2.0 terms.