Inspiration

Every message lands differently depending on who reads it. A founder's pitch that excites an early adopter may trigger skepticism in a risk-averse decision maker. A product launch that energizes power users may confuse first-time visitors. We've all experienced the moment when someone completely misreads what we wrote — and we only find out after the damage is done. We wanted to make that divergence visible before you hit send. MeaningMap answers the question: "How will your audience actually read this?"

What it does

MeaningMap is an AI-powered audience interpretation simulator. You paste a message — a startup pitch, product launch, cold email, or social post — and the system:

  1. Generates realistic audience personas with distinct worldviews, priorities, and audience share percentages (e.g. "Skeptical Pragmatist", "Budget-Conscious Buyer")
  2. Runs each persona as an independent interpretation agent, producing first-person reactions and scoring five signal dimensions: Clarity, Trust, Hype, Confusion, and Credibility
  3. Surfaces where your audience aligns and where it splits through a visual 2D "interpretation space" map, alignment/divergence stats, and outlier detection (any persona with trust < 30 gets flagged)
  4. Extracts misunderstanding risks ranked by severity, attributed to the specific personas that raised them
  5. Offers "Fix with AI" rewrites — one click generates a revised message that addresses a specific risk for a specific persona
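The outlier flagging in step 3 can be sketched as a small pure function — a minimal illustration, assuming personas carry 0–100 signal scores in a dict (the data shape and names here are ours, not the actual MeaningMap code):

```python
OUTLIER_TRUST_THRESHOLD = 30  # any persona scoring below this gets flagged

def detect_outliers(personas):
    """Return the personas whose trust score falls below the threshold.

    `personas` is assumed to be a list of dicts like
    {"name": ..., "scores": {"trust": ..., "clarity": ..., ...}}.
    """
    return [p for p in personas if p["scores"]["trust"] < OUTLIER_TRUST_THRESHOLD]

personas = [
    {"name": "Skeptical Pragmatist", "scores": {"trust": 22, "clarity": 64}},
    {"name": "Budget-Conscious Buyer", "scores": {"trust": 61, "clarity": 80}},
]
flagged = detect_outliers(personas)  # only the Skeptical Pragmatist is flagged
```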

The whole experience is wrapped in a cinematic loading flow with staged animations, and all past analyses are saved with vector embeddings for semantic search across your history.

How we built it

We went all-in on the DigitalOcean Gradient AI stack, using six platform capabilities end-to-end:

  • Gradient GenAI API — LLM calls for persona generation, interpretation agents, and AI-powered rewrites
  • Gradient Embeddings API — Vector embeddings for every analyzed message, enabling semantic search across history
  • Gradient ADK — Agent framework with @entrypoint decorator for local development via gradient agent run
  • Gradient Agent Platform — Deployed agent at agents.do-ai.run for programmatic invocation with tracing and logs
  • Managed PostgreSQL + pgvector — Persistent storage with HNSW-indexed vector similarity search
  • App Platform — Web app deployment with health checks and auto-deploy from GitHub

The backend is FastAPI (Python), orchestrating a pipeline where persona generation feeds into parallel interpretation agents, followed by scoring math, outlier detection, and risk extraction. The frontend is vanilla HTML/CSS/JS with a dark theme, SVG scatter plot map, and carefully choreographed CSS animations — all served from the same FastAPI service.
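The fan-out described above — persona generation feeding parallel interpretation agents — might look roughly like this in asyncio terms (the function bodies are placeholders standing in for the Gradient GenAI calls; names are illustrative):

```python
import asyncio

async def generate_personas(message: str) -> list[str]:
    # placeholder for the LLM call that invents audience personas
    return ["Skeptical Pragmatist", "Budget-Conscious Buyer"]

async def interpret(message: str, persona: str) -> dict:
    # placeholder for one interpretation agent's LLM call and scoring
    return {"persona": persona, "scores": {"clarity": 70, "trust": 55}}

async def analyze(message: str) -> list[dict]:
    personas = await generate_personas(message)
    # run one interpretation agent per persona, concurrently
    return await asyncio.gather(*(interpret(message, p) for p in personas))

results = asyncio.run(analyze("We just launched!"))
```

Scoring math, outlier detection, and risk extraction then run over `results` as ordinary synchronous post-processing.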

Challenges we ran into

  • pgvector on App Platform — We initially tried using an App Platform dev database, but pgvector requires a full Managed PostgreSQL cluster. We had to provision a standalone cluster and wire up the connection separately.
  • DB initialization ordering — The CREATE EXTENSION vector call had to run before the asyncpg vector codec could be registered, which required careful sequencing of our database init logic.
  • UX for information density — The analysis produces a lot of data (multiple personas, five signal dimensions each, risks, map coordinates). Finding the right visual hierarchy so users aren't overwhelmed — while still surfacing outliers dramatically — took several iterations. We landed on a layered reveal: stats first, then map, then persona cards, then risks.
  • Past Analyses placement — We initially placed the history panel between the input and results, which broke the primary page flow. We had to rethink the layout so it acts as a "start here if you want to revisit" entry point above the input, collapsed by default, rather than an interruption.
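The init-ordering fix boils down to a helper like this — a minimal sketch, with the codec registrar injected as a parameter so the ordering is easy to test in isolation (in a real setup it would be `register_vector` from the `pgvector.asyncpg` package, called on an `asyncpg` connection):

```python
import asyncio

async def init_vector_db(conn, register_vector):
    """Ordering matters: the extension must exist in the database
    before asyncpg can register a codec for the `vector` type."""
    await conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    await register_vector(conn)  # e.g. pgvector.asyncpg.register_vector
```

Run this once at startup, before any query that reads or writes embeddings.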

Accomplishments that we're proud of

  • The interpretation space map — A full-width SVG scatter plot with labeled quadrants, cluster ellipses, animated dots, and outlier pulse rings. Clicking a dot highlights the corresponding persona card and scrolls to it. It makes abstract divergence data immediately spatial and intuitive.
  • Six DigitalOcean Gradient services in one project — AI Inference, Embeddings, ADK, Agent Platform, Managed PostgreSQL with pgvector, and App Platform, all working together in a single coherent pipeline.
  • Semantic search over past analyses — You can search "pricing concerns" and find past analyses of messages that discussed pricing, even if completely different words were used. This is powered by Gradient Embeddings + pgvector cosine similarity.
  • The cinematic loading flow — A staged 4-phase animation (persona avatars appearing, progress bar, divergence callout, staggered result reveal) that turns a wait state into an engaging experience.
  • Accessibility-first animations — All entrance animations respect prefers-reduced-motion, disabled automatically for users who need it.
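The semantic search above rests on cosine similarity between embedding vectors — the quantity behind pgvector's cosine distance operator. A minimal pure-Python sketch (the `history` shape is illustrative; in production the ranking happens inside PostgreSQL against the HNSW index):

```python
import math

def cosine_similarity(a, b):
    """dot(a, b) / (|a| * |b|); pgvector's cosine *distance* is 1 minus this."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, history):
    """Rank past analyses by similarity of their stored embeddings
    to the query embedding. `history` is a list of (label, embedding) pairs."""
    return sorted(history,
                  key=lambda item: cosine_similarity(query_vec, item[1]),
                  reverse=True)
```

Because both the query ("pricing concerns") and the stored messages are embedded into the same space, nearby vectors match on meaning rather than shared words.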

What we learned

  • Vector search is surprisingly easy to add when you have pgvector and an embeddings API — the hard part is the database provisioning, not the code.
  • Interpretation divergence is more interesting than consensus — the outlier personas consistently produce the most actionable insights. Designing the UI to spotlight them (red rings, alert cards, inner monologue quotes) was the right call.
  • The Gradient ADK agent pattern (local dev with gradient agent run, deploy with gradient agent deploy) creates a clean separation between the browser-facing web app and a machine-facing agent API, which could let other agents or services invoke MeaningMap programmatically.
  • Vanilla JS still works — We shipped the entire frontend without a framework. For a single-page tool with rich animations, it kept the bundle simple and the iteration speed fast.

What's next for MeaningMap

  • Comparative analysis — Paste two versions of the same message and see a side-by-side diff of how personas react to each, highlighting which version performs better with which audience segments.
  • Audience presets — Let users define and save custom persona sets (e.g. "My investor audience", "Enterprise buyers") so they can consistently test against the same audience profile.
  • Conversation mode — Analyze entire threads or email chains, tracking how interpretation shifts across multiple messages.
  • Team sharing — Share analysis results with teammates via link, enabling collaborative message refinement before launch.
  • API integrations — Plug MeaningMap into writing tools (email clients, CMS editors, Slack) so teams can check interpretation risk inline without context-switching.

Built With

  • gradientadk
  • gradientagentplatform
  • gradientembeddings
  • gradientgenaiapi