Inspiration

We started with a deceptively hard question: how do you make AI feel more human? My partner and I brainstormed for hours and kept circling the usual answers: better prompts, better personalities, faster models. None of it captured the feeling of real collaboration.

Then we noticed something: the most human part of our process wasn’t the conversation, it was the shared work. While we were thinking, we were sketching, rearranging, crossing things out, labeling, and reacting to changes together in a single workspace. That’s when it clicked: AI feels human when it shares the same page with you, not when it talks at you from a chat box.

Looseleaf came from that realization.

What it does

Looseleaf is an AI-powered collaborative whiteboard where you can draw, type, and build ideas on a single canvas with an AI partner (Pythagoras).

Instead of producing a final answer in a chat bubble, Pythagoras collaborates by creating and evolving visible artifacts:

  • Draws on the canvas (strokes, connectors/arrows, highlights)
  • Adds structured text blocks (headings, bullets, notes)
  • Suggests “AI Insights” that you can accept or reject so the board doesn’t get spammed
  • Interprets what you add by tracking board state and sending visual snapshots so it understands the whole context, not just the last stroke

The result is a shared thinking surface where your work stays visible, editable, and spatial.

How we built it

  • Frontend: Next.js canvas app with a minimal Looseleaf-style UI (tools, lasso selection, resize handles, zoom, paste images).
  • Realtime transport: A Fastify backend with a WebSocket endpoint for low-latency collaboration. The client streams board deltas and context.
  • State + memory: The system tracks board objects (strokes, text blocks, connectors, notes, images), plus “what changed since the last turn,” so AI responses are grounded in current state.
  • Hybrid multimodal context: The AI receives structured board metadata and snapshots (full board + changed region) to interpret drawings, handwriting, and diagrams holistically.
  • Action protocol: The model outputs board operations (micro-ops) instead of essays. The UI applies these ops to render drawings and edits incrementally.
  • Persistence: Supabase stores canvas metadata and board JSON so users can reload and continue later.
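The action protocol above can be sketched in a few lines. This is a hypothetical, simplified illustration (the op names, fields, and `applyOps` helper are our own invention for this writeup, not the exact production schema): the model emits small typed board operations instead of prose, and the client applies each one to local board state so edits render incrementally.

```typescript
// A minimal sketch of the micro-op idea. Op shapes here are illustrative,
// not the exact schema Looseleaf uses in production.
type BoardOp =
  | { kind: "add_text"; id: string; x: number; y: number; text: string }
  | { kind: "add_stroke"; id: string; points: [number, number][] }
  | { kind: "move"; id: string; dx: number; dy: number }
  | { kind: "remove"; id: string };

interface BoardObject {
  id: string;
  x: number;
  y: number;
  text?: string;
  points?: [number, number][];
}

// Apply a batch of model-emitted ops to the client's board state.
function applyOps(board: Map<string, BoardObject>, ops: BoardOp[]): void {
  for (const op of ops) {
    switch (op.kind) {
      case "add_text":
        board.set(op.id, { id: op.id, x: op.x, y: op.y, text: op.text });
        break;
      case "add_stroke":
        board.set(op.id, { id: op.id, x: 0, y: 0, points: op.points });
        break;
      case "move": {
        const obj = board.get(op.id);
        if (obj) {
          obj.x += op.dx;
          obj.y += op.dy;
        }
        break;
      }
      case "remove":
        board.delete(op.id);
        break;
    }
  }
}

// Usage: a small batch of ops, applied in order.
const board = new Map<string, BoardObject>();
applyOps(board, [
  { kind: "add_text", id: "t1", x: 100, y: 80, text: "Pythagorean theorem" },
  { kind: "move", id: "t1", dx: 20, dy: 0 },
]);
```

Because every op is a small, validated unit, a malformed model output can be rejected op-by-op instead of poisoning an entire response.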

Challenges we ran into

  • Latency vs. intelligence: We wanted fast feedback without losing understanding. Sending the whole board every time was too heavy, but sending only the last stroke was too dumb.
  • “Last stroke” failure mode: Early versions literally described the most recent stroke instead of understanding the full sketch. Fixing this required better state tracking and snapshot strategy.
  • Schema brittleness: LLM-driven UI is powerful but unforgiving. Slightly invalid structured outputs can break the pipeline, so we had to tighten schemas and validation.
  • Layout and overlap: AI can draw the right content in the wrong place. Keeping diagrams readable required collision-aware placement and smarter anchoring.
  • Proactiveness without annoyance: A human collaborator doesn’t fill your page with noise. We had to dial down “coach” behavior and add an insights workflow that’s helpful but not intrusive.
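The collision-aware placement problem can be illustrated with a toy version of the idea (this is a hedged sketch, not Looseleaf's actual layout code: the `place` function and step size are invented for illustration). Given the bounding boxes already on the canvas, a candidate box is nudged until it stops overlapping anything.

```typescript
// Toy collision-aware placement: slide a new box downward until it
// no longer overlaps any existing box. Illustrative only.
interface Box {
  x: number;
  y: number;
  w: number;
  h: number;
}

// Standard axis-aligned rectangle overlap test.
function overlaps(a: Box, b: Box): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function place(candidate: Box, existing: Box[], step = 16): Box {
  const placed = { ...candidate };
  while (existing.some((b) => overlaps(placed, b))) {
    placed.y += step; // simplest strategy: keep sliding down
  }
  return placed;
}

// Usage: the candidate starts on top of an occupied region and gets nudged below it.
const taken: Box[] = [{ x: 0, y: 0, w: 100, h: 40 }];
const spot = place({ x: 10, y: 10, w: 80, h: 30 }, taken);
```

A real layout engine would also consider anchoring to related objects and horizontal slots, but even this naive pass prevents the AI from drawing correct content directly on top of the user's work.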

Accomplishments that we're proud of

  • Built a working MVP where AI collaborates by editing the canvas, not just chatting.
  • Got multimodal understanding to a usable baseline: the system can interpret what’s on the board using state + images, not one-off guesses.
  • Implemented an accept/reject pattern for AI suggestions so users stay in control.
  • Delivered a clean, minimal UI inspired by the Looseleaf aesthetic with real canvas tooling (lasso, resize, zoom, paste images).

What we learned

We learned that “human-like” AI is less about sounding human and more about interaction design:

  • Shared context beats chat scrolls
  • Visible work beats invisible reasoning
  • Incremental edits beat one-shot answers
  • Collaboration requires restraint, confirmation, and space for the user’s intent

In other words: making AI feel human is an interface problem as much as it is a model problem.

What's next for Looseleaf

  • Smarter layout engine for AI-generated diagrams (clean spacing, less overlap, more consistent structure).
  • Stronger tutoring mode for math and technical diagrams: step-by-step annotations directly on the user’s work (not separate explanations).
  • More reliable multi-focus understanding when multiple equations/diagrams exist on one canvas (better “which one are we talking about?” grounding).
  • True multiplayer collaboration (shared rooms, presence, cursors) with production-grade realtime infrastructure.
  • Deployment hardening and scaling: separate persistent WebSocket hosting for the backend, with Vercel for the frontend.
