Revenent: The AI Symbiote

What it does

Revenent is a living corporate intelligence system designed to preserve how a company’s best engineers think, work, and solve problems, then make that knowledge available to everyone else in real time.

Instead of acting like a basic company chatbot that only searches static docs, Revenent watches engineering workflows across tools like GitHub, Slack, Jira, and VS Code, detects meaningful patterns, and builds a growing memory of strong engineering habits. When a new hire or junior engineer needs help, the system doesn’t just answer from documentation. It answers using the team’s accumulated instincts, best practices, and problem-solving patterns, personalized to that specific user.

On the front end, the user interacts with a hyperrealistic avatar of a senior engineer or leader, built with Tilemast AI and voiced through ElevenLabs synthesis, so the experience feels like asking a real colleague for guidance rather than prompting a generic assistant.

The result is a system that feels less like a search tool and more like a living mentor that grows with the company.

The core idea

We asked a simple question:

What if a senior engineer’s intuition could be made transferable?

Not just their docs. Not just their wiki notes. Their actual instincts:

  • how they approach unfamiliar code

  • what patterns they consistently follow

  • what shortcuts are dangerous

  • what “good engineering judgment” looks like in practice

Revenent turns those instincts into organizational memory.

Every engineer contributes signal through their normal workflow. Strong patterns can be promoted into a shared company knowledge space. Weak or harmful patterns are filtered out. Over time, the system becomes a hive mind of the company’s best engineering behavior.

Key features

Real-time contextual assistant

Every event can trigger immediate contextual help. If a user opens a repo, starts a ticket, or touches a certain module, Revenent can surface relevant conventions, prior solutions, onboarding context, or role-specific guidance right away.

Habit detection and evaluation

We built a buffered habit evaluator that groups actions into meaningful windows before judging them. A single action means very little, but a sequence of actions can reveal strong or weak engineering habits.
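As a rough illustration, here is a minimal windowed evaluator in the spirit of the one described above. The action types, pattern rules, and window size are all hypothetical stand-ins; the real signals would come from the team's actual tooling:

```python
from collections import deque

WINDOW_SIZE = 10  # hypothetical window; the point is to judge sequences, not clicks

# Hypothetical pattern rules: each maps an ordered subsequence of actions to a signal.
STRONG_PATTERNS = [("read_tests", "edit_code", "run_tests")]  # test-aware editing
WEAK_PATTERNS = [("edit_code", "force_push")]                 # pushing without review

def contains_subsequence(actions, pattern):
    """True if `pattern` appears in `actions` in order (not necessarily adjacent)."""
    it = iter(actions)
    return all(step in it for step in pattern)

def evaluate_window(actions):
    """Judge a full window of action types as strong / weak / neutral."""
    if any(contains_subsequence(actions, p) for p in STRONG_PATTERNS):
        return "strong"
    if any(contains_subsequence(actions, p) for p in WEAK_PATTERNS):
        return "weak"
    return "neutral"

class HabitBuffer:
    """Accumulates actions per user and evaluates only once a full window exists."""
    def __init__(self):
        self.buffers = {}

    def add(self, user_id, action_type):
        buf = self.buffers.setdefault(user_id, deque(maxlen=WINDOW_SIZE))
        buf.append(action_type)
        if len(buf) == WINDOW_SIZE:
            verdict = evaluate_window(list(buf))
            buf.clear()
            return verdict
        return None  # a single action means very little on its own
```

The deliberate design choice here is that `add` returns nothing until a window fills, which is exactly why isolated actions never trigger a judgment.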

Shared company hive mind

When the system identifies a standout technique or “personal best moment,” it can promote that idea into the shared company memory so other engineers benefit later.

Personal memory namespace

Each engineer also gets their own isolated memory space containing onboarding context, resume information, interview notes, strengths, learning patterns, and habit history.

Personalized teaching

If one engineer learned distributed systems in school and another learned through prior work experience, Revenent can explain the same problem differently for each person. The system adapts how it teaches based on the user’s background.

Semantic Git and context search

When an engineer picks up a Jira ticket, Revenent can semantically search the repo and related engineering context to find code, patterns, and prior work that are conceptually relevant, not just keyword-matched.

Smart reminder bubbles

We also prototyped a contextual reminder system that can surface relevant notes from Slack at the exact moment they matter, creating the feeling of a proactive assistant that lives alongside the engineer’s workflow.

Dashboard and admin review

The Django frontend includes habit trends, scoring, review tools, and an admin panel for human-in-the-loop oversight, manual labeling, and promoted best-practice review.

How we used Moorcheh AI in a uniquely powerful way

Moorcheh AI became the memory backbone of the entire Symbiote. We did not use it like a basic chatbot memory store or a simple vector database attached to an LLM. We used it as a living, multi-layered intelligence system with separated namespaces, cross-namespace communication, and role-aware memory promotion. That design is what made the whole project feel less like a company GPT wrapper and more like an actual organizational brain.

At the core of our architecture, Moorcheh AI stores memory in two major layers: a shared company namespace and isolated per-user namespaces. The company namespace acts as the long-term collective memory of the organization. It contains things like best practices, engineering wisdom, onboarding knowledge, architecture guidance, and promoted good habits learned from strong contributors. The per-user namespaces are completely separate and hold each engineer’s personal context, including onboarding details, learning preferences, resume-derived background, interview notes, and private habit history. This separation was critical because it let us personalize responses deeply without compromising privacy.

What makes our use of Moorcheh AI especially unique is that memory is not just stored — it evolves. As the Symbiote observes activity across tools like Slack, GitHub, Jira, and the coding environment, it analyzes behavior in batches and looks for meaningful patterns. When it detects a high-signal, high-quality engineering behavior, that habit can be promoted from an individual memory space into the shared company memory. In other words, the system is constantly turning real engineering behavior into reusable institutional intelligence. That means the memory layer grows from lived action, not just from documents people remembered to write.

We also designed the memory architecture so the namespaces are not isolated silos. They communicate through a pub/sub-style relevance bridge. When a new best practice is promoted into the company namespace, Moorcheh AI can selectively distribute that knowledge into the personal memory spaces of users for whom it is relevant based on metadata like role, technical stack, seniority, or onboarding profile. This allows every user’s memory to stay fresh and context-aware without forcing the system to rebuild everything from scratch on every query.
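A minimal sketch of that fan-out, using in-memory lists in place of Moorcheh namespaces and a hypothetical relevance rule over role and stack metadata (the profile fields and matching logic are illustrative, not the real bridge):

```python
# Hypothetical user profiles; the real system derives these from onboarding data.
USERS = {
    "u1": {"role": "backend", "stack": {"python", "postgres"}, "seniority": "junior"},
    "u2": {"role": "frontend", "stack": {"typescript"}, "seniority": "senior"},
}

# In-memory stand-ins for the shared and per-user memory namespaces.
company_memory = []
user_memories = {uid: [] for uid in USERS}

def is_relevant(profile, practice):
    """Match on role or overlapping stack tags; the real bridge would be richer."""
    return (practice.get("role") in (None, profile["role"])
            or bool(profile["stack"] & set(practice.get("stack", ()))))

def promote_and_distribute(practice):
    """Add a practice to company memory, then fan it out only to relevant users."""
    company_memory.append(practice)
    delivered = []
    for uid, profile in USERS.items():
        if is_relevant(profile, practice):
            user_memories[uid].append(practice)
            delivered.append(uid)
    return delivered
```

The key property is selectivity: a backend-tagged practice reaches the backend engineer's personal memory but never touches the frontend engineer's, which is what keeps personal namespaces fresh without spamming everyone.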

This gave us an unusually advanced personalization layer. A new hire is not just asking a model for help. They are asking a memory system that knows what the company has collectively learned, what senior engineers have done well in similar situations, and what teaching style is most likely to work for that specific person. If someone has a background in distributed systems, for example, the Symbiote can explain a new backend concept in terms that connect to that prior knowledge. If another user learns better through concrete examples, the system can adapt accordingly. Moorcheh AI made that possible by letting us combine shared organizational memory with deeply individualized memory in one architecture.

The end result is that our memory layer is far more complex than a normal RAG pipeline. It is multi-namespace, privacy-aware, promotable, distributable, personalized, and continuously improving. Instead of storing static facts, it stores evolving patterns of how a company thinks, works, and teaches. That is what makes the Symbiote feel alive.

How we built it

We designed Revenent as a six-layer system:

1. Ingestion layer

We used Unified (self-hosted) as the integration hub to connect external tools and route engineering activity into the platform. Events from GitHub, Slack, Jira, and VS Code are tagged per user and forwarded into our backend.

2. Processing layer

A FastAPI service acts as the brain and router. Every incoming event is split into two paths:

  • a fast path for immediate contextual help

  • a buffered path for deeper habit evaluation
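The routing decision can be sketched without the web framework; `fast_path` and the queue below are simplified stand-ins for the real FastAPI handlers and the buffered pipeline:

```python
import queue

habit_queue = queue.Queue()  # buffered path: drained later in batches

def fast_path(event):
    """Immediate contextual help; a real handler would query both memory namespaces."""
    return {"user_id": event["user_id"], "hint": f"context for {event['type']}"}

def handle_event(event):
    """Split each incoming event into both paths: enqueue it, then answer now."""
    habit_queue.put(event)   # deferred habit evaluation
    return fast_path(event)  # immediate contextual response
```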

3. Buffering layer

We used Redis to store short windows of activity per user. This was critical because isolated actions do not contain enough signal. Small action batches let the system reason about behavior as a pattern instead of as noise. The implementation plan sizes this buffer at roughly 10–15 actions per user.
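A minimal, runnable sketch of that buffering flow, using a tiny in-memory stand-in for Redis (the method names mirror redis-py's `rpush`, `lrange`, and `delete`); the 12-action window is a hypothetical midpoint of the plan's 10–15 range:

```python
WINDOW = 12  # hypothetical midpoint of the 10-15 action window

class FakeRedis:
    """In-memory stand-in so the buffering logic runs without a Redis server."""
    def __init__(self):
        self.lists = {}
    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)
        return len(self.lists[key])
    def lrange(self, key, start, stop):
        data = self.lists.get(key, [])
        stop = len(data) if stop == -1 else stop + 1
        return data[start:stop]
    def delete(self, key):
        self.lists.pop(key, None)

r = FakeRedis()  # with redis-py this would be redis.Redis(...)

def buffer_action(user_id, action):
    """RPUSH the action; once the window fills, drain the list for batch evaluation."""
    key = f"habit_buffer:{user_id}"
    if r.rpush(key, action) >= WINDOW:
        batch = r.lrange(key, 0, -1)
        r.delete(key)
        return batch  # hand off to the habit evaluator
    return None
```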

4. Memory layer

We used Moorcheh AI as the persistent memory system. Our architecture uses multiple namespaces:

  • a company-wide namespace for shared wisdom, best practices, architecture knowledge, and promoted habits

  • a per-user namespace for private onboarding context, personal learning style, strengths, and habit history

This let us combine personalized assistance with shared organizational learning. The plan explicitly separates company_global_wisdom from user_{id}_memory to keep company memory and private memory isolated.
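A small sketch of that naming scheme and how a query fans out across both namespaces (the helper names are ours for illustration, not Moorcheh API calls):

```python
COMPANY_NAMESPACE = "company_global_wisdom"

def user_namespace(user_id):
    """Per-user private namespace, following the plan's user_{id}_memory naming."""
    return f"user_{user_id}_memory"

def namespaces_for_query(user_id, include_private=True):
    """A query fans out to the shared namespace plus, optionally, the user's own."""
    spaces = [COMPANY_NAMESPACE]
    if include_private:
        spaces.append(user_namespace(user_id))
    return spaces
```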

5. Persistence layer

We used PostgreSQL + pgvector for storing structured logs, scores, and embeddings to support search and historical analysis.
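To show conceptually what the pgvector search does, here is a pure-Python sketch of cosine-distance ranking; in Postgres the same ranking would be expressed with pgvector's `<=>` operator over a `vector` column (e.g. `SELECT id FROM logs ORDER BY embedding <=> %s LIMIT k`). The rows and vectors below are hypothetical:

```python
import math

def cosine_distance(a, b):
    """The same metric pgvector's <=> operator computes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Hypothetical stored rows: (id, embedding). In Postgres this is a vector column.
ROWS = [("log1", [1.0, 0.0]), ("log2", [0.0, 1.0]), ("log3", [0.9, 0.1])]

def nearest(query_vec, k=2):
    """Rank rows by cosine distance to the query, as the pgvector query would."""
    ranked = sorted(ROWS, key=lambda row: cosine_distance(query_vec, row[1]))
    return [row_id for row_id, _ in ranked[:k]]
```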

6. Frontend layer

We used Django for the dashboard and admin interface, and paired that with Tilemast AI and ElevenLabs to create the avatar-based user experience.

Architecture overview

The system runs on a dual-track model:

Track 1: Real-time assistant

Every action can immediately query both the user namespace and the company namespace to generate contextual help.

Track 2: Buffered habit evaluation

Actions are accumulated in Redis and then evaluated in batches. This gives the system enough context to determine whether the engineer is demonstrating strong habits, weak habits, or standout behavior worth promoting.

If a strong, reusable pattern is found, the Promotion Engine checks whether that idea already exists in the shared company memory. If it is genuinely novel or better, it gets promoted into the company namespace for everyone else to benefit from later. The plan describes this promotion flow explicitly as the self-improving core of the system.
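A minimal sketch of that novelty gate, using a crude token-overlap similarity in place of real embedding comparison (both the metric and the threshold are hypothetical):

```python
def similarity(a, b):
    """Crude token-overlap (Jaccard) similarity; the real system would compare embeddings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

NOVELTY_THRESHOLD = 0.6  # hypothetical cutoff for "already known"

def maybe_promote(candidate, company_memory):
    """Promote a practice only if nothing sufficiently similar already exists."""
    for existing in company_memory:
        if similarity(candidate, existing) >= NOVELTY_THRESHOLD:
            return False  # duplicate or near-duplicate; keep company memory clean
    company_memory.append(candidate)
    return True
```

The point of the gate is asymmetry: rejecting a near-duplicate costs nothing, while promoting one would slowly dilute the shared namespace.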

What makes this different

Most internal AI tools are just document search with a nicer UI.

Revenent is different because it learns from behavior, not just documentation.

It does not only answer:

“What does the wiki say?”

It can also answer:

“How would the strongest engineer on this team probably approach this?”

“What patterns have worked before in similar contexts?”

“How should this be explained to this specific user based on how they learn?”

That is the difference between a company GPT wrapper and a living corporate symbiote.

Challenges we ran into

Batching signal correctly

One of the biggest challenges was realizing that single actions are almost meaningless. We needed to evaluate engineering behavior as sequences, not isolated clicks. That pushed us toward the Redis buffering model.

Namespace isolation

We wanted personalization without sacrificing privacy. That meant building strict separation between per-user memory and company-wide memory, while still allowing useful promotion of good habits.

Cross-namespace relevance

It was technically difficult to make shared wisdom useful without spamming everyone with irrelevant updates. We had to think carefully about when a promoted practice should remain global and when it should also be pushed into individual user memory.

Trust and privacy

A system that observes engineering workflows can easily feel invasive. We had to design around opt-in data use, private user memory, admin review, and transparent controls from the beginning. The implementation plan explicitly limits Slack capture to opt-in public channels, excludes DMs, and focuses on patterns and metadata instead of private content.

UI reliability

Our “thought bubble” reminder prototype worked, but it is fragile in its current form. It proved the UX concept, but it should become a true browser extension in the next version.

What we learned

1. Batching is everything

The Redis buffer ended up being one of the most important architectural decisions. Good AI judgment needs context.

2. Trust is a product feature

Privacy is not just a compliance box. It is necessary for adoption. Engineers will not use a system like this unless personal memory is clearly isolated and reviewable.

3. Personalization matters more than raw retrieval

The ability to explain something in a way that matches a user’s background makes the system far more helpful than a generic answer engine.

4. Human-in-the-loop review matters

Automatic labeling is powerful, but it becomes much more trustworthy when admins can audit, review, and refine promotions and labels.

5. The interface changes engagement

Using a realistic avatar and voice experience changed how people interacted with the system. It felt more like mentorship than search.

Accomplishments we’re proud of

We are proud that we built more than a chatbot.

We built:

  • a dual-track intelligence pipeline

  • a multi-namespace memory system

  • a promotion engine for organizational learning

  • a personalized onboarding-based teaching layer

  • semantic repo/task search

  • a proactive contextual reminder concept

  • a full-stack dashboard and review flow

  • a believable human-style interaction layer with voice and avatars

Most importantly, we proved the core thesis: engineering intuition can be captured, filtered, and redistributed as a living company asset.

What’s next

Better human review

We want stronger human-in-the-loop controls, confidence thresholds, and review queues before automatically promoting habits.

Richer onboarding context

Right now we use resume and interview information. Next, we want to support deeper onboarding context and consent-based profile enrichment.

Better repo intelligence

We want semantic search to expand beyond code into related PRs, Slack context, and prior implementation history.

Team-level analytics

Right now the experience is mainly user-level. We want to add views that show what habits are spreading across teams, where knowledge gaps exist, and what best practices are actually being adopted.

Stronger productization

The reminder-bubble feature should become a proper extension, and the overall system should become easier to deploy as reusable infrastructure.

Final thoughts

Revenent started with a wild idea: what if a company could preserve the instincts of its strongest engineers and make them available to everyone else?

What we built is an early version of that future.

It is part mentor, part memory system, part engineering coach, and part organizational brain. It grows as the company grows. It learns what good looks like. It helps new hires ramp faster. It preserves tribal knowledge before it disappears. And instead of letting great engineering judgment leave when someone leaves the company, it turns that judgment into something the whole organization can keep learning from.

That is Revenent.
