GLAT

Bring unmerged teammate changes into your AI coding prompt—without waiting for the PR.

Imagine you are at a hackathon and want to combine your frontend with your backend. Merging turns out to be extremely hard because the two halves were written by separate AI agents. You start to wish those agents had known what each other was doing and had written compatible code, ready to merge.

We know that feeling. That is why we built this tool: GLAT.

How it works

  1. User calls GLAT with a prompt: The user types a request into the GLAT sidebar (e.g., “How does the new authentication work?”) and hits Enter.

  2. GLAT queries Moorcheh (Semantic Search): GLAT sends the user's natural language prompt to Moorcheh, which searches teammate change summaries that match the intent of the prompt.

  3. Moorcheh returns matching UIDs: Moorcheh returns an array of card_ids corresponding to the most relevant uncommitted changes.

  4. GLAT fetches the source of truth from Supabase: GLAT queries Supabase using those UIDs to retrieve the full Change Cards (AI summaries and raw code diffs). As a fallback, GLAT also checks the currently open file and queries Supabase to ensure related context is included.

  5. GLAT builds the Copilot context packet locally: GLAT uses local TypeScript to combine the user’s prompt, teammate AI summaries, raw diffs, and the current file’s code into a compiled Markdown document. It copies this to the clipboard and opens it in a new VS Code window.

  6. Copilot makes edits: The user pastes the context packet into Copilot Chat. Copilot generates new code and the user saves the file.

  7. GLAT automatically reads the new diff: The extension detects the save event, waits for 10 seconds of typing inactivity, runs git add -N . (intent-to-add) to include new files, then runs git diff to capture the exact code changes.

  8. GLAT uses Gemini to generate a summary: The captured diff is sent to gemini-2.5-flash, which returns a concise human-readable summary and predicted impacted files.

  9. GLAT stores the full state in Supabase: GLAT generates a unique ID, creates a ChangeCard object, and saves it to Supabase. After saving, it runs git add "filename" to stage the changes and reset the local diff state.

  10. GLAT syncs the semantic index to Moorcheh: GLAT attaches the same unique ID to the AI summary and metadata (author, changed files) and POSTs it to Moorcheh so future queries can find this change.


Inspiration

Modern software teams increasingly rely on AI coding assistants such as Copilot or ChatGPT. These tools can generate code quickly, suggest refactors, and automate routine tasks. However, they still operate with a major limitation: they only see your local codebase, not the work your teammates are currently doing.

This creates a collaboration gap between AI-assisted development and real team workflows.

Merge conflicts and integration problems remain a major source of friction in software teams. Studies suggest that resolving merge conflicts can consume 10–20% of developer time [1], while coordination and integration issues can cost teams dozens of hours per developer each month. Even worse, 56% of developers delay resolving merge conflicts because they are complex and time-consuming [2], and no one wants to touch them.

At the same time, 80–85% of developers now regularly use AI coding assistants [3], meaning more code than ever is being generated with AI support.

The problem is that AI tools generate code in isolation. They cannot see uncommitted changes happening across the team. As a result, AI-generated code can easily become incompatible with upcoming changes, duplicate work already done by teammates, or introduce integration conflicts.

We started GLAT with a simple idea:

What if AI coding assistants could understand what your teammates are working on before those changes are merged? How do we stop merge conflicts before they happen? A brute-force idea would be to feed the LLM every local change log, but that would exhaust our AI agents' context windows and quotas almost immediately. So the real question became: how do we make our app memory efficient?


What it does

GLAT bridges the gap between AI-generated code and real team context.

GLAT captures a developer’s local, uncommitted git diff and converts it into a structured unit we call a Change Card. Each Change Card contains information about who made the change, when it was broadcast, which files were modified, which files might be impacted, a summarized description of the change, and the raw diff itself.
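The Change Card described above can be sketched as a TypeScript type. The field names below are our illustration of the structure, not necessarily the exact stored schema:

```typescript
// Sketch of a Change Card; field names are illustrative,
// not necessarily the exact Supabase schema.
interface ChangeCard {
  card_id: string;           // unique ID shared with the Moorcheh index
  author: string;            // who made the change
  broadcast_at: string;      // ISO timestamp of when it was broadcast
  changed_files: string[];   // files actually modified
  impacted_files: string[];  // files predicted to be affected
  summary: string;           // AI-generated human-readable summary
  raw_diff: string;          // raw `git diff` output
}

const example: ChangeCard = {
  card_id: "3f9c1a2b",
  author: "alice",
  broadcast_at: new Date(0).toISOString(),
  changed_files: ["src/auth.ts"],
  impacted_files: ["src/login.ts"],
  summary: "Switch session auth to JWT tokens.",
  raw_diff: "diff --git a/src/auth.ts b/src/auth.ts\n...",
};
```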

These Change Cards are stored and indexed so they can be retrieved later. When another developer asks their AI assistant for help, GLAT retrieves the most relevant Change Cards using Moorcheh's vector database and injects them directly into the AI prompt.

From the AI’s perspective, those teammate changes are treated as if they already exist in the codebase. This allows the assistant to generate code that stays compatible with the future state of the repository, not just the current local checkout.

Instead of AI generating code in isolation, GLAT enables AI-assisted development that is aware of team context.


How we built it

GLAT is implemented as a VS Code extension that connects local development workflows with a shared semantic memory system.

When a developer edits files locally, GLAT reads the current git diff and constructs a Change Card containing the modified files, an AI-generated summary, and the raw diff. The extension stores these Change Cards in Supabase and indexes them in Moorcheh's vector database for later retrieval.

To support efficient context retrieval, we integrated Moorcheh’s semantic memory engine. Each Change Card summary is uploaded to Moorcheh, creating a searchable memory layer for team changes. When a developer prepares AI context, GLAT queries this semantic memory using the developer’s prompt to retrieve the most relevant changes. This part is crucial because:

  1. Reduced latency: we fetch and feed only the context relevant to the prompt, not everything.
  2. Efficient token usage: instead of handing the AI the whole changelog, we select only the context likely to help with the user's request. This keeps us from burning through model quotas and credits in minutes.

The system combines semantic retrieval with file-based indexing, allowing GLAT to identify both conceptually related changes and changes directly linked to the active file. These results are merged and injected into a structured prompt that is automatically opened in the VS Code chat interface.
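The merge step described above can be sketched as combining the two result lists and keeping the first occurrence of each `card_id`. This is a hypothetical helper showing the technique, not GLAT's exact code:

```typescript
// Sketch: merge semantically retrieved cards with file-indexed cards,
// deduplicating by card_id so each Change Card is injected only once.
// Hypothetical helper; names are our illustration.
interface CardRef {
  card_id: string;
  summary: string;
}

function mergeResults(semantic: CardRef[], fileIndexed: CardRef[]): CardRef[] {
  const seen = new Set<string>();
  const merged: CardRef[] = [];
  // Semantic hits come first, so they win ties with file-indexed hits.
  for (const card of [...semantic, ...fileIndexed]) {
    if (!seen.has(card.card_id)) {
      seen.add(card.card_id);
      merged.push(card);
    }
  }
  return merged;
}
```

Ordering semantic hits first is one reasonable design choice here: conceptually relevant changes stay at the top of the injected prompt, while file-linked changes fill in anything the semantic search missed.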

This architecture effectively creates a pipeline:

local git diff → Change Card → semantic memory → contextual retrieval → AI prompt injection

Through this pipeline, GLAT allows AI assistants to reason about team activity and upcoming codebase changes.


Challenges we ran into

The first challenge was learning how the VS Code extension ecosystem works from scratch. GLAT is not a typical AI web app but an actual tool for developers to use inside their editor. None of us had prior experience writing extensions, and we were unsure whether to continue with the project, since it might not be functional in the end. Despite committing to the idea quite late in the timeline, we figured things out and even integrated GLAT with Copilot in VS Code.

Another challenge was retrieving relevant context without overwhelming the AI prompt. If too many unrelated changes were injected, the assistant’s responses became noisy and less reliable. We addressed this by combining semantic search through Moorcheh with file-path indexing and deduplication logic, ensuring that only the most relevant Change Cards are included.

Another challenge was prompt design. When AI assistants saw diffs, they sometimes attempted to re-implement those changes themselves. To solve this, we added system instructions that explicitly tell the assistant to assume teammate changes already exist and to generate code that remains compatible with them. Without that guardrail, the assistant would redo teammates' work, defeating the purpose of working on separate branches.
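A minimal sketch of the kind of system instruction we mean. The wording below is illustrative of the intent, not our exact prompt text:

```typescript
// Illustrative system instruction prepended to the context packet.
// The exact wording is an assumption; this only shows the intent.
const SYSTEM_INSTRUCTION = [
  "The teammate diffs below are ALREADY part of the codebase.",
  "Do NOT re-implement or revert them.",
  "Generate code that stays compatible with those changes.",
].join(" ");
```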


Accomplishments that we're proud of

We successfully built a working system that connects git-based development workflows, semantic memory, and AI coding assistants into a single pipeline.

GLAT demonstrates that semantic memory can be used to capture and retrieve live development context, allowing AI tools to operate with awareness of team activity. The system automatically transforms local diffs into structured knowledge that can be searched, retrieved, and injected into AI prompts.

We are particularly proud that GLAT goes beyond a conceptual prototype. It runs as a real VS Code extension, integrates with Moorcheh’s memory engine, and provides a complete workflow for broadcasting changes and preparing AI context.


What we learned

Building GLAT taught us that the biggest limitation of current AI coding tools is not their ability to generate code—it is their lack of contextual awareness.

Developers work in teams, but AI assistants currently operate as if each developer is working alone. By introducing a semantic memory layer that captures and retrieves teammate changes, we can give AI systems access to the evolving state of a project.

We also learned how important efficient memory retrieval is for AI-powered tools. Moorcheh’s semantic memory allowed us to store high-level change summaries and retrieve the most relevant ones based on developer prompts, making context-aware AI assistance practical.


What's next for GLAT

GLAT is an early step toward a broader idea: a shared memory layer for AI-assisted software development.

Future work could include better change summarization, merge-conflict forecasting, and deeper integration with AI coding agents and other extensions within VS Code. We also see opportunities to expand GLAT to multi-repository environments and larger engineering teams.

Our long-term vision is for AI assistants to operate with a full understanding of a team’s evolving codebase. Instead of generating code based only on what exists locally, AI tools would be able to reason about the future state of the system.

GLAT demonstrates how semantic memory and contextual retrieval can bring us closer to that future.

References

[1] Ghiotto, G., Murta, L., Barros, M., & van der Hoek, A.
On the Nature of Merge Conflicts: A Study of 2,731 Open Source Java Projects Hosted by GitHub.
https://leomurta.github.io/papers/ghiotto2018.pdf

[2] Empirical study on developer behavior regarding merge conflict resolution.
ScienceDirect.
https://www.sciencedirect.com/science/article/abs/pii/S0164121223002315

[3] AI Coding Assistant Adoption Statistics.
GetPanto AI Blog.
https://www.getpanto.ai/blog/ai-coding-assistant-statistics
