UIGraph Onboarding Setup Guide
Audience: Engineers onboarding a new service repo into UIGraph
Time: ~15 minutes for a complex service with 30+ endpoints
Output: Fully populated UIGraph service with topology, flow diagrams, DB schemas, async flows, and MCP context ready for agents
Overview
Three phases:
- Extract — use AI prompts to generate all UIGraph artifacts from your existing codebase
- Sync — push everything to UIGraph via the CLI
- Maintain — configure your repo so AI agents auto-update diagrams on every change
Run each prompt inside Claude Code or Cursor pointed at your repo. Each prompt is self-contained — paste it into the chat, let the agent run, review the output.
Prerequisites
# Install UIGraph CLI
go install github.com/uigraph-app/uigraph-cli@latest
# Verify
export PATH="$PATH:$HOME/go/bin"
uigraph-cli help
# Create a token in the UIGraph dashboard and export it
export UIGRAPH_TOKEN="your-token-here"
mkdir -p .uigraph/diagrams .uigraph/flows .uigraph/schemas .uigraph/jobs
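Before moving on, a quick sanity check can confirm the steps above took effect. This is an optional sketch; it only inspects your local environment and makes no network calls.

```shell
# Optional sanity check before Phase 1 — verifies the setup steps above.
status=0
command -v uigraph-cli >/dev/null 2>&1 || { echo "warning: uigraph-cli not on PATH"; status=1; }
[ -n "${UIGRAPH_TOKEN:-}" ] || { echo "warning: UIGRAPH_TOKEN is not set"; status=1; }
mkdir -p .uigraph/diagrams .uigraph/flows .uigraph/schemas .uigraph/jobs
for d in diagrams flows schemas jobs; do
  [ -d ".uigraph/$d" ] || { echo "error: .uigraph/$d is missing"; status=1; }
done
if [ "$status" -eq 0 ]; then echo "setup looks good"; else echo "fix the warnings above first"; fi
```

If anything prints a warning, resolve it before running the prompts — the CLI and token are only needed in Phase 2, but the directories must exist for the agents to save files into.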
Phase 1 — Extract
Run prompts in order. Each builds on the previous.
Prompt 1 — Service Topology Diagram
Purpose: Maps every external connection — services, queues, databases, third-party APIs, caches, storage.
Save output to: .uigraph/diagrams/
How to run: Paste the prompt below into Claude Code or Cursor with your full repo in context.
PROMPT — copy everything below this line
You are a senior software architect documenting a backend service for a system context graph.
Analyze this entire codebase and produce a topology diagram showing how this service connects to everything outside it.
WHAT TO FIND:
- All outbound HTTP/gRPC/GraphQL calls to other internal services — check HTTP clients, service clients, SDK calls, base URLs in config/env files
- All message queue interactions — SQS, SNS, Kafka, EventBridge, RabbitMQ, Pub/Sub — check publishers, consumers, topic/queue names in config
- All databases this service owns or queries — Postgres, MySQL, DynamoDB, MongoDB, Redis, Elasticsearch — check connection strings, ORM models, repository files
- All external third-party APIs — Stripe, Twilio, SendGrid, Auth0, etc.
- All caches — Redis, Memcached, in-memory
- All storage interactions — S3, GCS, CloudFront
- Inbound callers — what triggers this service? API gateway, event trigger, cron, other services
NAMING RULES — critical for graph matching:
- Use the exact service name from go.mod, package.json, or pyproject.toml
- For queues: use the actual queue/topic name from config, not a generic label
- For databases: use the actual database name, not just "postgres" or "dynamodb"
OUTPUT — produce two files:
File 1: topology.mmd
A Mermaid flowchart LR. Keep node IDs short (api, worker, db, queue). Use edge labels to describe the relationship type.
Node shape conventions:
- [name] — rectangle for services and internal components
- {{name}} — hexagon for message queues and event buses
- [(name)] — cylinder for databases and caches
Edge label conventions:
- sync HTTP — synchronous REST or GraphQL calls
- sync gRPC — synchronous gRPC
- publishes — async message publishing
- consumes — async message consumption
- reads/writes — database read and write
- reads — read-only (cache hit, replica)
- calls — third-party API call
Example:
flowchart LR
api[HTTP API]
worker[Background Worker]
queue{{payment-events}}
db[(payments-db)]
cache[(Redis)]
orderSvc[order-service]
fraudSvc[fraud-service]
stripe[Stripe API]
api -->|publishes| queue
api -->|reads/writes| db
api -->|reads| cache
api -->|sync HTTP| fraudSvc
api -->|sync HTTP| orderSvc
worker -->|consumes| queue
worker -->|reads/writes| db
worker -->|calls| stripe

File 2: topology.context.json
UIGraph context overlay. Enriches each node with its type and metadata. Node type values:
- cloud — AWS/GCP/Azure managed services (SQS, RDS, DynamoDB, S3)
- data-source — databases linked to a UIGraph DB schema
- component — internal services with their own UIGraph service entry
- text — external third-party APIs
- shape — generic components (workers, jobs, caches)
Example:
{
"name": "payment-service topology",
"description": "Service topology for payment-service",
"nodes": {
"api": {
"type": "component",
"name": "HTTP API",
"data": {
"Owner": { "type": "Text Input", "value": "payments-team" },
"Protocol": { "type": "Text Input", "value": "REST" }
}
},
"worker": {
"type": "shape",
"shape": "rectangle",
"name": "Background Worker",
"data": {
"Trigger": { "type": "Text Input", "value": "SQS consumer" }
}
},
"queue": {
"type": "cloud",
"name": "payment-events",
"cloud": "AWS",
"service": "Amazon SQS"
},
"db": {
"type": "data-source",
"name": "payments-db",
"dbConfig": {
"service": "UIGraph Adapter",
"database": "payments",
"tableName": ""
}
},
"stripe": {
"type": "text",
"name": "Stripe API",
"data": {
"Docs": { "type": "URL Input", "value": "https://stripe.com/docs/api" }
}
}
},
"edges": {
"api-queue": { "label": "publishes" },
"api-db": { "label": "reads/writes" },
"api-fraudSvc": { "label": "sync HTTP" },
"worker-queue": { "label": "consumes" },
"worker-stripe": { "label": "calls" }
},
"groups": {
"payment-service": {
"name": "payment-service",
"nodes": ["api", "worker"]
}
}
}

After the files, add three short sections:
What this service does — 2-3 sentences on business purpose and role in the system.
Key dependencies — bullet list of the most critical dependencies and why they matter.
Uncertainty — anything you could not confirm from the code, needs engineer review.
Save files:
- Write topology.mmd to .uigraph/diagrams/topology.mmd
- Write topology.context.json to .uigraph/diagrams/topology.context.json
Prompt 2 — API Endpoint Flow Diagrams
Purpose: For each logical group of endpoints, produces a flowchart showing the full request flow — auth, business logic, DB operations, downstream calls, and events published.
Note: For 30+ endpoint services, run this prompt once per domain area (e.g. "focus only on the payments endpoints"). More focused runs produce more accurate diagrams.
PROMPT — copy everything below this line
You are a senior software architect documenting backend API flows for a system context graph.
Analyze this codebase and produce Mermaid flowchart diagrams for every meaningful API endpoint group.
GROUPING STRATEGY:
Group endpoints by business domain, not by HTTP method.
- Payment creation, capture, cancel → one "Payment lifecycle" diagram
- User registration, login, token refresh, logout → one "Auth flow" diagram
- Order create, update status, cancel, get → one "Order management" diagram
Do not create a separate diagram per endpoint — group related ones. The one exception: endpoints with complex branching logic get their own diagram.
FOR EACH GROUP produce a Mermaid flowchart TD that shows:
- Entry point — the HTTP method and path
- Auth and validation checks — what runs before business logic
- Main business logic steps in order
- All DB operations — specify READ or WRITE and the actual table name
- All downstream service calls — specify the endpoint called if visible
- Cache reads and writes — actual cache key pattern
- All events published — queue name and event type
- Success exit — status code and key response fields
- Key error exits — what causes 400, 401, 404, 409, 500
- Any async jobs triggered
Diagram conventions:
- flowchart TD — top down
- [text] — rectangles for process steps
- {text} — diamonds for decisions
- [(text)] — cylinders for DB and cache operations
- {{text}} — hexagons for queue publishes
- (text) — rounded rectangles for start and end points
- Label branches: -->|Yes| and -->|No|
Naming rules:
- DB steps: Read DB: table_name or Write DB: table_name
- Cache steps: Read Cache: key_pattern or Write Cache: key_pattern
- Queue steps: Publish: EventName to queue-name
- Service calls: Call: service-name POST /path
Example:
flowchart TD
A(POST /v2/payments) --> B[Validate JWT]
B --> C{Token valid?}
C -->|No| D(401 Unauthorized)
C -->|Yes| E[Validate request body]
E --> F{Amount > 0 and currency valid?}
F -->|No| G(400 Bad request)
F -->|Yes| H[(Read Cache: idempotency:key)]
H --> I{Cache hit?}
I -->|Yes| J(200 OK cached)
I -->|No| K[Call: fraud-service POST /v1/risk-score]
K --> L{Score > 0.8?}
L -->|Yes| M(422 Blocked)
L -->|No| N[(Write DB: payments INSERT)]
N --> O[(Write DB: ledger_entries INSERT)]
O --> P{{Publish: PaymentCreated to payment-events}}
P --> Q[(Write Cache: idempotency:key)]
Q --> R(201 Created)

Produce one diagram per endpoint group. After all diagrams add:
Coverage notes — any endpoints you could not fully trace, unclear dependencies, or logic that needs engineer review.
Save files:
- Write one .mmd file per endpoint group to .uigraph/flows/. Use the group name as the filename, e.g. .uigraph/flows/payment-lifecycle.mmd, .uigraph/flows/auth-flow.mmd
- Write coverage notes to .uigraph/flows/api-flows-notes.md
Prompt 3 — Async Flow Diagrams
Purpose: Documents every background process — event consumers, queue workers, webhooks, scheduled jobs. These are often invisible to the API layer but contain critical business logic.
Save output to: .uigraph/flows/ (one file per async process, plus notes — see the save instructions inside the prompt)
PROMPT — copy everything below this line
You are a senior software architect documenting asynchronous flows for a system context graph.
Analyze this codebase and produce Mermaid diagrams for every async process — flows that run outside the HTTP request/response cycle.
FIND ALL OF THESE:
Message consumers — handlers reading from SQS, Kafka, EventBridge, SNS, RabbitMQ, Pub/Sub, Redis Streams. Check worker files, consumer registrations, handler mappings.
Scheduled jobs / cron — functions triggered on a schedule. Check cron expressions, EventBridge rules, Celery beat, Sidekiq, pg_cron, Lambda scheduled triggers.
Background workers — Celery tasks, Sidekiq jobs, BullMQ workers, Go goroutines started at boot.
Webhooks received — endpoints designed to receive callbacks from Stripe, GitHub, Twilio, etc. These are HTTP but represent async external triggers.
Event-driven chains — if consuming one event causes publishing another, map the full chain.
FOR EACH ASYNC PROCESS produce a Mermaid flowchart TD that shows:
- Trigger — what starts this process
- Message/event payload — key fields consumed
- Every processing step in order
- All DB reads and writes — actual table names
- All downstream service calls
- Events/messages published as output
- Retry logic — failure behavior, retry count, dead letter handling
- Idempotency — how duplicate messages are handled
- Completion — what "done" looks like
Use flowchart TD for most async flows. Use sequenceDiagram for event chains that involve multiple services.
Name each process: [trigger] → [outcome]. Example: PaymentCreated event → ledger reconciliation.
Use the same service/DB/queue names as in the topology and API flow diagrams.
Example:
flowchart TD
A([Trigger: PaymentCreated from Queue: payment-events]) --> B
B[Parse: paymentId, customerId, amount]
B --> C{Payment in DB: payments?}
C -->|No| D[Log warning: unknown paymentId]
D --> E([Dead letter: dlq-payment-events])
C -->|Yes| F[Read DB: ledger_entries]
F --> G{Entry already exists?}
G -->|Yes| H[Skip — idempotent]
H --> Z([Done — ack])
G -->|No| I[Write DB: ledger_entries]
I --> J[Write DB: account_balances]
J --> K{Success?}
K -->|No| L[Rollback]
L --> M([Retry: 3x with backoff])
K -->|Yes| N[Publish: LedgerUpdated to Queue: ledger-events]
N --> Z

After all diagrams add:
Async flow inventory — table: Process name | Trigger | Frequency | Criticality | Retry strategy
Missing or unclear — any async processes you suspect exist but could not fully trace.
Save files:
- Write one file per async process to .uigraph/flows/. Use the process name as the filename, e.g. .uigraph/flows/payment-created-ledger.mmd, .uigraph/flows/order-expired-cleanup.mmd
- Write the inventory table and missing/unclear notes to .uigraph/flows/async-flows-notes.md
Prompt 4 — Background Jobs Detail
Purpose: Detailed documentation for each scheduled job — inputs, outputs, dependencies, failure behavior.
Save output to: .uigraph/jobs/ (job entries plus a dependency diagram — see the save instructions inside the prompt)
PROMPT — copy everything below this line
You are a senior software architect documenting scheduled and batch jobs for a system context graph.
Find every scheduled job, batch processor, and recurring background task in this codebase.
For each job produce a structured entry with these fields:
Job name: exact function or class name
Schedule: cron expression or frequency, e.g. 0 2 * * * = daily at 2am UTC
Purpose: one sentence — what business outcome does this job achieve
Trigger: cron / EventBridge rule / manual / on-demand
Input:
- Data sources: which tables, queues, or external APIs it reads
- Filters: what records qualify
- Batch size: if applicable
Processing steps: numbered list of what the job does in order — be specific about table names and operations
Output:
- DB writes: which tables, what changes
- Events published: which queues, what event type
- External calls: any API calls per record
- Files produced: S3, GCS, local — if any
Error handling:
- Single record failure: skip and continue / abort / retry
- Job crash mid-run: restart behavior, idempotency
- Alerting: any monitoring on this job
Performance profile:
- Typical record count per run
- Typical duration
- Known bottlenecks
Dependencies:
- Must run after: jobs that must complete first
- Must run before: jobs that depend on this one
- Conflicts with: jobs that cannot run concurrently
After all jobs, produce a Mermaid flowchart showing job dependencies and sequence:
flowchart LR
jobA[Daily reconciliation] --> jobB[Balance snapshot]
jobB --> jobC[Report generation]

Save files:
- Write each job entry to .uigraph/jobs/jobs.md
- Write the dependency diagram to .uigraph/jobs/job-dependencies.mmd
Prompt 5 — Database Schema + Access Patterns
Purpose: Full schema extraction plus the access patterns — how the service actually queries the data.
Save output to: .uigraph/schemas/
PROMPT — copy everything below this line
You are a senior software architect documenting database schemas and access patterns for a system context graph.
Find all schemas from: ORM models (GORM, Prisma, SQLAlchemy, TypeORM), migration files, SQL DDL files, DynamoDB CDK/CloudFormation definitions, MongoDB Mongoose models.
For each database produce:
Database name and type (Postgres / DynamoDB / MongoDB / MySQL)
Purpose: one sentence — what business data lives here
For each table or collection:
- Table name and purpose
- All columns with type, nullable, default, and a short description
- All indexes — type, fields, and what query each enables
- Foreign keys with cascade behavior
Access patterns — the 5-10 most important query shapes the service runs against this table. Be specific:
- "Fetch user by email" → SELECT * FROM users WHERE email = $1
- "List payments by customer newest first" → SELECT * FROM payments WHERE customer_id = $1 ORDER BY created_at DESC LIMIT $2
For DynamoDB tables also document:
- Partition key and sort key — attribute names and types, and why this key design was chosen
- Each GSI — name, partition key, sort key, and which access pattern it enables
- Item types if single-table design — document every PK/SK pattern and which entity it represents
For MongoDB also document:
- All indexes with the query pattern each enables
- A representative document showing all fields and types
After all schemas add:
Cross-database relationships — any data duplicated or referenced across databases (e.g. userId appears in both payments-db and orders-db)
Schema health observations — missing indexes, potential N+1 patterns visible in the code, tables that need archiving
Save files: Write one file per database to .uigraph/schemas/ using the correct format for each dialect:
- Postgres / MySQL / SQLite → .sql file, e.g. .uigraph/schemas/payments-db.sql
- DynamoDB → .json file, e.g. .uigraph/schemas/payments-table.json
- MongoDB → .json file, e.g. .uigraph/schemas/payments-collection.json
Write access patterns, cross-database relationships, and schema health observations to .uigraph/schemas/schemas-notes.md
Prompt 6 — Service Summary
Purpose: Dense natural language summary optimized for AI agents. This is what agents read to understand this service before starting any task.
Save output to: .uigraph/service-summary.md
PROMPT — copy everything below this line
You are a senior software architect writing system context documentation for an AI agent that will work on this codebase.
You have already analyzed this service. Produce a structured summary that gives an AI agent deep understanding before starting any task. This is not for humans — write it dense and precise. No filler phrases.
Service identity
- Name: exact name from go.mod / package.json / pyproject.toml
- Language/runtime: e.g. Go 1.22, Node 20, Python 3.11
- Framework: e.g. Gin, Express, FastAPI
- Purpose: 1-2 sentences on what business capability this service owns
- Criticality: tier1 = payment/auth/core data | tier2 = important features | tier3 = supporting
- Team: if visible in CODEOWNERS or README
What this service owns — bullet list of business concepts this service is the authoritative source for.
What this service does NOT own — explicit list of adjacent concepts owned by other services.
API surface
- Total endpoints and auth mechanism
- Rate limiting if visible
- Versioning strategy
- Key endpoint groups with 1-line description each
Data stores — for each: name, type, what it stores, approximate scale if visible.
Upstream dependencies — list: service name | what this service calls | sync or async | impact if that service is down
Downstream dependents — services that call this one, if visible from README or service discovery config.
Async flows
- Publishes: event types and queues
- Consumes: event types and queues
- Scheduled jobs: name and schedule
Critical business rules — bullet list of important logic not obvious from endpoint names. Examples:
- Payments above $10,000 require manual review before capture
- Refunds can only be issued within 90 days
- Idempotency keys expire after 24 hours
Known failure modes — what breaks this service and what happens downstream. Examples:
- If fraud-service is down: payments above $500 are blocked (fail closed)
- If payments-db is down: all writes fail, reads continue from replica
Non-obvious implementation details — things that take a new engineer days to discover. Examples:
- All monetary amounts stored in cents, not dollars
- DynamoDB uses single-table design — all entity types in one table
- The status field on payments has 8 valid states — see PaymentStatus enum
- Redis is used for idempotency only, not as a DB read cache
Onboarding gotchas — things that trip up new engineers working on this service.
Save file: Write the complete summary to .uigraph/service-summary.md
Phase 2 — Sync
Step 1 — Generate .uigraph.yaml
Run this prompt in Claude Code or Cursor from your repo root. It will read the docs and generate a valid config from all the artifacts you just created.
PROMPT — copy everything below this line
Read the UIGraph CLI documentation at https://docs.uigraph.app/uigraph-cli and all sub-pages to understand the exact .uigraph.yaml format and supported sections.
Then scan this repo and generate a .uigraph.yaml at the repo root that references all UIGraph artifacts — diagrams, flows, schemas, docs, and API specs.
For the service name, description, and repository URL: use .uigraph/service-summary.md if it exists, otherwise infer from go.mod, package.json, pyproject.toml, or README.
Save the file to .uigraph.yaml.
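For orientation only, a generated config might look roughly like the sketch below. Every key here is a guess based on the artifacts this guide produces; the authoritative schema is the CLI documentation the prompt reads, so treat this as a shape to expect, not a format to copy.

```yaml
# HYPOTHETICAL sketch — verify every field name against the UIGraph CLI docs;
# none of these keys are confirmed by this guide.
name: payment-service
description: Owns payment creation, capture, and refunds
repository: https://github.com/example/payment-service
diagrams:
  - .uigraph/diagrams/topology.mmd
flows:
  - .uigraph/flows/
schemas:
  - .uigraph/schemas/
docs:
  - .uigraph/service-summary.md
```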
Step 2 — Sync
uigraph-cli sync --dry-run # preview what will be synced
uigraph-cli sync # push everything
# Open the UIGraph dashboard in your browser to verify the service and artifacts.
Phase 3 — Maintain
Add these files to your repo so agents keep UIGraph diagrams up to date automatically when they make code changes.
CLAUDE.md
Place at repo root:
# Claude Code — UIGraph context
## UIGraph artifacts
This repo has UIGraph diagrams, flow charts, and schemas in the .uigraph/ directory.
These are the authoritative source for how this service works — read them before
exploring the codebase when starting a task.
## Keep UIGraph in sync when you make changes
When you add or modify an API endpoint:
1. Update the OpenAPI spec
2. Find the relevant flow diagram in .uigraph/flows/ and update it to reflect
any new steps, DB operations, downstream calls, or events published
3. Run: uigraph-cli sync
When you add or modify an async consumer or publisher:
1. Find the relevant diagram in .uigraph/flows/ and update it
2. Run: uigraph-cli sync
When you add or modify a scheduled job:
1. Update the relevant file in .uigraph/jobs/
2. Run: uigraph-cli sync
When you add or modify a database table or schema:
1. Update the relevant schema file in .uigraph/schemas/
2. Run: uigraph-cli sync
When you add a new service dependency, queue, or database:
1. Update the topology diagram in .uigraph/diagrams/
2. Run: uigraph-cli sync
## Sync commands
uigraph-cli sync # sync changed files
uigraph-cli sync --dry-run # preview without sending
.cursor/rules
Place at .cursor/rules:
# UIGraph rules
## UIGraph artifacts
This repo has UIGraph diagrams and schemas in .uigraph/.
Read them before exploring the codebase when starting a task.
## Keep UIGraph in sync when you make changes
Modified an API endpoint or handler:
- Find the relevant flow diagram in .uigraph/flows/ and update it
- Reflect new steps, DB operations, downstream calls, events published
- Run: uigraph-cli sync
Modified an async consumer, worker, or publisher:
- Find the relevant diagram in .uigraph/flows/ and update it
- Run: uigraph-cli sync
Modified a scheduled job:
- Update the relevant file in .uigraph/jobs/
- Run: uigraph-cli sync
Modified a database schema:
- Update the relevant schema file in .uigraph/schemas/
- Run: uigraph-cli sync
Added a new external dependency, queue, or database:
- Update the topology diagram in .uigraph/diagrams/
- Run: uigraph-cli sync
.windsurfrules
Same content as .cursor/rules above.
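The rules files cover agent-driven edits. For changes made directly by humans, a lightweight git hook can surface drift before it lands. The sketch below only warns and never blocks a push; whether a failing `uigraph-cli sync --dry-run` exit code reliably signals unsynced artifacts is an assumption to verify against your CLI version.

```shell
#!/bin/sh
# Sketch of a pre-push hook: save as .git/hooks/pre-push and chmod +x.
# Warns when .uigraph/ changed since the last push but may not have been synced.
# Assumption (unverified): a non-zero exit from `sync --dry-run` means local
# artifacts differ from what UIGraph has.
changed=$(git diff --name-only @{push}..HEAD 2>/dev/null | grep '^\.uigraph/' || true)
if [ -n "$changed" ]; then
  echo "UIGraph artifacts changed since last push:"
  echo "$changed"
  uigraph-cli sync --dry-run || echo "warning: dry-run reported differences; run uigraph-cli sync"
fi
```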
Checklist
- Prompt 1 — topology diagram reviewed and saved
- Prompt 2 — all API endpoint groups have flow diagrams
- Prompt 3 — all async consumers and publishers documented
- Prompt 4 — all scheduled jobs documented
- Prompt 5 — all database schemas extracted with access patterns
- Prompt 6 — service summary written and reviewed
- .uigraph.yaml configured
- uigraph-cli sync run — verified in the UIGraph UI
- CLAUDE.md added to repo root
- .cursor/rules added
- .windsurfrules added
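The file-existence items above can be checked mechanically. A small sketch; the paths are the example filenames used throughout this guide, so substitute your real artifact names.

```shell
# Verify the onboarding checklist files exist (example paths from this guide).
check_uigraph_artifacts() {
  for f in .uigraph/diagrams/topology.mmd \
           .uigraph/diagrams/topology.context.json \
           .uigraph/service-summary.md \
           .uigraph.yaml CLAUDE.md .cursor/rules .windsurfrules; do
    if [ -e "$f" ]; then echo "ok       $f"; else echo "MISSING  $f"; fi
  done
}
check_uigraph_artifacts
```

Any MISSING line points at a phase to revisit; it does not check content quality, only that each artifact was saved.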
Tips for complex services
30+ endpoints: Run Prompt 2 once per domain area — "focus only on the payments endpoints." More focused runs produce more accurate diagrams.
Heavy async logic: Run Prompt 3 immediately after Prompt 1. Async flows are often where the most critical business logic lives and the most context is missing.
Thin passthrough microservices: Topology diagram is the most valuable artifact. Spend less time on flow diagrams.
Monorepos: Scope each prompt — "focus only on /services/payment-service."
Legacy services: Start with Prompt 6. Even partial information is useful, and the uncertainty sections tell you exactly what needs engineer review.