This document provides a high-level introduction to the AGi repository, covering its purpose, architecture, and major subsystems. AGi implements autonomous cognitive systems with conversational AI capabilities, emphasizing edge deployment and sophisticated memory architectures.
The AGi repository hosts autonomous cognitive systems designed for conversational interaction and robotic applications. The repository contains two parallel projects: AuRoRA (Autonomous Robot with Reasoning Architecture) and AIVA (AI Virtual Assistant). Both implement a Semantic Cognitive System (SCS) that provides cognitive infrastructure for the GRACE AI personality.
The systems are built on ROS2 Humble using Python and are designed to run on edge hardware, specifically the NVIDIA Jetson Orin Nano Super 8GB platform. They integrate with language models via a vLLM server and implement a biologically-inspired triune memory architecture.
Sources: README.md 1-23, README.md 26-36
The repository contains two distinct but architecturally similar projects:
| Project | Description | Hardware Target | Status |
|---|---|---|---|
| AuRoRA | Autonomous Robot with Reasoning Architecture | Jetson Orin Nano | Active Development |
| AIVA | AI Virtual Assistant | Server / PC | Future / Variant |
Both projects implement the same core SCS (Semantic Cognitive System) package structure. AuRoRA represents the main development focus, integrating with a Waveshare UGV Beast tracked robot, LiDAR, and depth cameras.
Diagram: Repository Structure and Project Organization
Sources: README.md 51-64
GRACE (Generative Reasoning Agentic Cognitive Entity) is the AI personality system implemented by the Semantic Cognitive System. GRACE represents a complete cognitive architecture designed for conversational interaction, implementing a triune memory model (Working, Episodic, and Semantic).
The system bridges natural language interaction with specific code entities responsible for cognitive functions.
Diagram: GRACE Cognitive Architecture - Code Entity Mapping
Each memory tier is implemented by a dedicated module with specific logic:

- Working Memory Cortex (wmc.py): Short-term buffer with a 1400-token budget.
- Episodic Memory Cortex (emc.py): Persistent storage using SQLite and embeddinggemma-300m for semantic search.
- Memory Coordination Core (mcc.py): Coordinates between tiers to build the context window for the LLM.

Sources: README.md 105-137, README.md 57-60
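The token-budgeted working memory tier can be sketched as a deque that evicts the oldest turns once the budget is exceeded. This is an illustrative sketch, not the actual wmc.py code: the class name, the whitespace-based token estimate, and the eviction policy are assumptions; only the 1400-token default comes from the source.

```python
from collections import deque

class WorkingMemory:
    """Sketch of a token-budgeted FIFO turn buffer (illustrative, not the real wmc.py)."""

    def __init__(self, max_tokens: int = 1400):
        self.max_tokens = max_tokens
        self.turns: deque = deque()  # entries are (role, text, token_count)
        self.total_tokens = 0

    @staticmethod
    def _estimate_tokens(text: str) -> int:
        # Crude stand-in for a real tokenizer: count whitespace-separated words.
        return len(text.split())

    def add_turn(self, role: str, text: str) -> None:
        tokens = self._estimate_tokens(text)
        self.turns.append((role, text, tokens))
        self.total_tokens += tokens
        # Evict oldest turns until back under budget (always keep the newest turn).
        while self.total_tokens > self.max_tokens and len(self.turns) > 1:
            _, _, evicted = self.turns.popleft()
            self.total_tokens -= evicted

    def recent(self):
        return [(role, text) for role, text, _ in self.turns]

wm = WorkingMemory(max_tokens=10)
wm.add_turn("user", "hello there GRACE")           # 3 tokens
wm.add_turn("assistant", "hi how can I help you")  # 6 tokens
wm.add_turn("user", "tell me about your memory")   # 5 tokens -> two oldest turns evicted
```

A deque keeps both append and eviction O(1), which matters on a memory-constrained Jetson-class device.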
The SCS package implements GRACE's cognitive infrastructure through four primary Python modules acting as ROS2 nodes or support classes.
| Module | Class | Role | Implementation Detail |
|---|---|---|---|
| cnc.py | CNC | Central Neural Core | Main ROS2 node handling /cns/neural_input. |
| mcc.py | MCC | Memory Coordination Core | Logic for build_context() and add_turn(). |
| wmc.py | WMC | Working Memory Cortex | Deque-based FIFO storage for recent turns. |
| emc.py | EMC | Episodic Memory Cortex | Async embedding worker and SQLite interface. |
Sources: README.md 57-60, README.md 143-179
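The episodic tier's pattern, storing turns with embedding vectors in SQLite and recalling by cosine similarity, can be sketched without the real model. This is not the actual emc.py: the real system uses embeddinggemma-300m and an async worker, whereas here a toy deterministic word-bucket embedder stands in, and the schema and function names are assumptions.

```python
import math
import sqlite3
import struct

DIM = 64  # toy embedding dimension (the real model's is much larger)

def embed(text: str):
    # Toy deterministic stand-in for embeddinggemma-300m:
    # bucket words by character-code sum into a fixed-size unit vector.
    vec = [0.0] * DIM
    for word in text.lower().split():
        vec[sum(map(ord, word)) % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))  # vectors are already unit-length

conn = sqlite3.connect(":memory:")  # the real EMC would use an on-disk file
conn.execute("CREATE TABLE episodes (id INTEGER PRIMARY KEY, text TEXT, vec BLOB)")

def store(text: str) -> None:
    # Pack the float vector into a BLOB column alongside the raw text.
    blob = struct.pack(f"{DIM}f", *embed(text))
    conn.execute("INSERT INTO episodes (text, vec) VALUES (?, ?)", (text, blob))

def recall(query: str, k: int = 1):
    # Brute-force scan: fine for small on-device memories.
    qv = embed(query)
    rows = conn.execute("SELECT text, vec FROM episodes").fetchall()
    scored = [(cosine(qv, struct.unpack(f"{DIM}f", blob)), text) for text, blob in rows]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

store("the robot uses a Waveshare UGV Beast chassis")
store("GRACE stores long-term memories in SQLite")
best = recall("robot chassis Waveshare")
```

Brute-force scanning avoids any vector-index dependency, a reasonable trade-off when the episode count stays small on edge hardware.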
The following diagram traces a single turn from the user's natural language input through code-level processing and back to the user.
Diagram: End-to-End Data Flow - CNC to Memory to LLM
Sources: README.md 143-179
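In outline, one turn flows: input arrives on /cns/neural_input, the MCC assembles context from working and episodic memory, the prompt goes to the vLLM server, and the exchange is written back into memory. A dependency-free sketch of that loop under stated assumptions: the function names, the substring-match recall, and the stub LLM are all illustrative, and the real system runs these stages as separate ROS2 nodes.

```python
def build_context(system_prompt, recent_turns, recalled, user_input):
    """Assemble the LLM prompt: persona, episodic recall, recent dialogue, new input."""
    lines = [system_prompt]
    lines += [f"[memory] {m}" for m in recalled]
    lines += [f"{role}: {text}" for role, text in recent_turns]
    lines.append(f"user: {user_input}")
    return "\n".join(lines)

def handle_input(user_input, state, llm):
    """One turn of the CNC loop: recall -> context -> LLM -> store."""
    # Toy recall: substring match stands in for semantic search over embeddings.
    recalled = [m for m in state["episodic"] if any(w in m for w in user_input.split())]
    prompt = build_context("You are GRACE.", state["working"], recalled, user_input)
    reply = llm(prompt)
    # Write the turn back to both tiers (the real EMC embeds asynchronously).
    state["working"] += [("user", user_input), ("assistant", reply)]
    state["episodic"].append(user_input)
    return reply

state = {"working": [], "episodic": ["the robot has a LiDAR sensor"]}
reply = handle_input("describe your sensor suite", state,
                     llm=lambda prompt: f"(reply for {len(prompt)} chars of context)")
```

Keeping context assembly as a pure function of memory state makes the MCC's behavior easy to test independently of ROS2 and the LLM server.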
| Category | Technology | Purpose |
|---|---|---|
| Inference | Cosmos Reason2 2B | Vision + Reasoning brain via vLLM. |
| Middleware | ROS2 Humble | Native architecture for inter-node communication. |
| Embeddings | embeddinggemma-300m | CPU-only semantic embeddings for memory. |
| Storage | SQLite | Lightweight on-device persistent memory storage. |
| Interface | rosbridge | WebSocket bridge to the AGi.html web GUI. |
Sources: README.md 39-46
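The rosbridge link speaks the rosbridge v2 JSON protocol over a WebSocket, so the AGi.html GUI can publish to ROS2 topics with plain JSON frames. A sketch of what a web client might send to deliver user input: the /cns/neural_input topic comes from the table above, but the std_msgs/String message type and the default port 9090 are assumptions about this deployment.

```python
import json

# rosbridge v2 protocol: advertise a topic, then publish to it.
advertise = {
    "op": "advertise",
    "topic": "/cns/neural_input",
    "type": "std_msgs/String",  # assumed message type for text input
}
publish = {
    "op": "publish",
    "topic": "/cns/neural_input",
    "msg": {"data": "Hello GRACE, what can you see?"},
}

# A real client would send these frames over a WebSocket,
# typically ws://<jetson-host>:9090 (rosbridge's default port).
frames = [json.dumps(advertise), json.dumps(publish)]
```

Because the protocol is plain JSON, the browser side needs no ROS client library, only a WebSocket.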
The project uses structured templates for bug reports and enhancements. When reporting bugs, developers are expected to provide details on hardware (e.g., Jetson Orin Nano), the specific ROS2 package (scs), and the failing node (cnc, mcc, etc.).
Sources: .github/ISSUE_TEMPLATE/bug_report.md 28-34
The repository is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). Under the AGPL, anyone who runs a modified version that users interact with over a network (such as a web-interfaced robot) must make the modified source available to those users.
Sources: LICENSE 1-12, README.md 8