
RelayOps AI

RelayOps AI is an offline-first, AI-assisted field operations copilot built for the PowerSync AI Hackathon.

It is designed to make one idea instantly obvious to judges:

when connectivity is poor, the app should still behave like a first-class operational tool — and PowerSync should be the reason why.

What RelayOps demonstrates

RelayOps focuses on inspections, incident reporting, triage, and supervisor oversight for remote and low-connectivity operations.

The current MVP proves a complete local-first workflow:

  1. a technician creates a high-severity incident locally
  2. local state updates immediately through PowerSync-managed SQLite
  3. the incident appears in a visible pending sync queue
  4. reconnecting promotes it into the synced supervisor approval queue
  5. a supervisor approves or rejects the recommendation
  6. the audit timeline shows the full chain across human action, AI action, and synchronization
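The six steps above can be sketched as a small state machine. This is a hypothetical in-memory model with illustrative names, not the app's actual schema; in RelayOps the same state lives in PowerSync-managed SQLite.

```typescript
// Hypothetical sketch of the local-first incident lifecycle described above.
// The real app persists this state in PowerSync-managed SQLite; a plain
// in-memory model is used here so the promotion steps are easy to follow.

type SyncState = "local_only" | "pending_sync" | "synced";

interface Incident {
  id: string;
  severity: "low" | "high" | "critical";
  syncState: SyncState;
  approval: "none" | "pending" | "approved" | "rejected";
}

interface AuditEvent {
  actor: "human" | "ai" | "sync";
  action: string;
}

const audit: AuditEvent[] = [];

// Steps 1–3: the technician creates the incident locally; it lands in the
// visible pending queue immediately, with no network involved.
function createLocalIncident(id: string): Incident {
  audit.push({ actor: "human", action: `created ${id} offline` });
  return { id, severity: "high", syncState: "pending_sync", approval: "none" };
}

// Step 4: reconnecting promotes the record into the synced approval queue.
function promoteOnReconnect(incident: Incident): void {
  incident.syncState = "synced";
  incident.approval = "pending";
  audit.push({ actor: "sync", action: `uploaded ${incident.id}` });
}

// Step 5: a supervisor decides; step 6: the audit trail now spans
// human, AI, and sync actors in one sequence.
function decide(incident: Incident, ok: boolean): void {
  incident.approval = ok ? "approved" : "rejected";
  audit.push({ actor: "human", action: `${incident.approval} ${incident.id}` });
}

const inc = createLocalIncident("INC-204");
promoteOnReconnect(inc);
decide(inc, true);
// audit now holds the full offline → sync → approval chain
```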

Why this matters

Most field software still assumes the network is reliable. That assumption breaks in refineries, utilities yards, remote service environments, logistics hubs, and industrial sites.

When connectivity degrades, teams need software that can still:

  • read assigned work locally
  • capture notes and evidence locally
  • draft structured incident data locally
  • preserve pending writes durably
  • synchronize later without losing trust or traceability

RelayOps is built around that exact problem.

Why local-first matters here

Local-first is not a performance trick in this project. It is the product model.

Without local-first

  • incident capture blocks on the network
  • AI assistance becomes fragile
  • technicians wait instead of working
  • supervisors get inconsistent handoffs
  • audit trails lose the sequence of events

With local-first

  • every critical action lands in local state first
  • the UI stays fast because it queries local SQLite directly
  • offline actions become visible queue entries instead of hidden ephemeral state
  • sync becomes a meaningful promotion step, not an invisible implementation detail
  • AI and human actions end up in the same shared, auditable record
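The "visible queue entries" point can be reduced to a sketch: the pending queue is just a filter over local rows carrying a sync flag, not separate hidden machinery. The row shape below is illustrative, not the app's actual schema.

```typescript
// Hypothetical sketch: the pending sync queue is a plain query over local
// rows. Each local write carries a sync flag, so "queued" is visible state.

interface LocalRow {
  id: string;
  table: string;
  synced: boolean;
}

// Everything not yet uploaded is first-class, queryable product state.
function pendingQueue(rows: LocalRow[]): LocalRow[] {
  return rows.filter((r) => !r.synced);
}

const rows: LocalRow[] = [
  { id: "INC-204", table: "incidents", synced: false },
  { id: "WO-17", table: "work_orders", synced: true },
];

const queue = pendingQueue(rows);
// queue contains only the offline INC-204 write
```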

Why PowerSync is essential

PowerSync is the backbone of RelayOps.

It is used to:

  • manage the local embedded SQLite database that powers the UI
  • support instant local reads and writes in the technician workflow
  • make queued local actions visible as first-class product state
  • model the handoff from local-only draft to synced supervisor data
  • align human action, AI action, and sync state inside one auditable system

This repo intentionally surfaces those transitions in the product UI through:

  • global sync pills
  • install/offline-shell pills and a field-app install button in the header
  • record-level sync badges
  • a pending actions queue
  • an approval queue that only shows synced incidents
  • an audit timeline with offline and sync events

Sync Streams usage

The production-oriented stream design lives in:

  • powersync/sync-streams.yaml

RelayOps uses Sync Streams in four main modes:

1. Auto-subscribed reference data

  • organizations
  • sites
  • playbooks

These should always be offline-available.

2. Auto-subscribed technician workspace

  • assigned work orders
  • inspections
  • relevant assets
  • user/supervisor context

This keeps the field workspace hot locally.

3. Auto-subscribed incident collaboration state

  • incidents
  • notes
  • action comments with mentions
  • photo evidence
  • voice memos
  • transcripts
  • AI runs
  • AI recommendations
  • audit events
  • sync events

This powers the activity feed, timeline, and technician incident workspace.

4. Auto-subscribed supervisor queue plus on-demand detail

  • supervisor approval queue is subscribed automatically
  • a specific incident detail stream is subscribed on demand by incident_id

That keeps the sync footprint disciplined while still supporting deep review workflows.
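The four modes can be modeled from the client's point of view as a set of active subscriptions. The stream names below are illustrative stand-ins; the actual topology lives in powersync/sync-streams.yaml.

```typescript
// Hypothetical client-side model of the four stream modes above. Stream
// names are illustrative; the real design is in powersync/sync-streams.yaml.

const AUTO_SUBSCRIBED = [
  "reference_data",         // mode 1: organizations, sites, playbooks
  "technician_workspace",   // mode 2: assigned work orders, inspections, assets
  "incident_collaboration", // mode 3: incidents, notes, evidence, AI runs, audit
  "supervisor_queue",       // mode 4a: approval queue rows
];

const active = new Set<string>(AUTO_SUBSCRIBED);

// Mode 4b: deep incident detail is only pulled when a supervisor opens it,
// keyed by incident_id; a matching delete drops the stream when the view
// closes. This keeps the default sync footprint small.
function openIncidentDetail(incidentId: string): void {
  active.add(`incident_detail:${incidentId}`);
}

openIncidentDetail("INC-204");
// active now holds the four auto streams plus incident_detail:INC-204
```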

How PowerSync connects the frontend, backend, and agents

Frontend

The UI writes incident bundles into the local PowerSync database first. All major surfaces query local SQL state through watched queries.
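The watched-query pattern can be sketched with a tiny stub. This stand-in only illustrates the shape of the idea; the real app uses the PowerSync Web SDK against local SQLite rather than this in-memory class.

```typescript
// Minimal simulation of the watched-query pattern. This is a stand-in stub,
// not the PowerSync Web SDK: it shows why local writes render instantly.

type Listener = () => void;

class WatchedTable<T> {
  private rows: T[] = [];
  private listeners: Listener[] = [];

  // UI surfaces subscribe to a query and re-render on every local change.
  watch(onChange: (rows: T[]) => void): void {
    const fire = () => onChange([...this.rows]);
    this.listeners.push(fire);
    fire(); // immediate first result straight from local state
  }

  // Writes land locally first; every watcher sees them at once,
  // regardless of network state.
  insert(row: T): void {
    this.rows.push(row);
    this.listeners.forEach((l) => l());
  }
}

const incidents = new WatchedTable<{ id: string }>();
let rendered: { id: string }[] = [];
incidents.watch((rows) => (rendered = rows));
incidents.insert({ id: "INC-204" });
// rendered now reflects the local write with no network round trip
```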

Backend

The intended production path is:

  • PowerSync connector uploads queued writes
  • Supabase Postgres becomes the backend source of truth
  • PowerSync Sync Streams distribute the relevant subsets back to clients
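The first bullet can be sketched as a connector-style upload step. The interfaces below imitate the general shape of a PowerSync backend connector but are local stand-ins, not the SDK's actual types.

```typescript
// Hedged sketch of the intended upload path. These interfaces imitate the
// connector shape but are local stand-ins, not the PowerSync SDK types.

interface CrudEntry {
  op: "PUT" | "PATCH" | "DELETE";
  table: string;
  id: string;
}

interface CrudTransaction {
  crud: CrudEntry[];
  complete(): void; // completing is what removes rows from the local queue
}

const uploaded: CrudEntry[] = [];

// Stand-in for pushing one queued write batch into Supabase Postgres.
function uploadData(tx: CrudTransaction): void {
  for (const entry of tx.crud) {
    uploaded.push(entry); // production path: upsert/delete against Postgres
  }
  tx.complete(); // only now does the write leave the pending queue
}

let completed = false;
uploadData({
  crud: [{ op: "PUT", table: "incidents", id: "INC-204" }],
  complete: () => {
    completed = true;
  },
});
```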

Agents

Mastra is used for the triage workflow path.

  • offline mode uses a deterministic local draft path
  • sync/reconnect replays the incident through a Mastra-backed triage route
  • AI output is written back into the same operational record as human actions

That means AI is not bolted on as a chat gimmick — it participates in the same synchronized state graph.
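The two triage paths can be sketched as follows. Both functions are simple stand-ins with made-up rules; in the real app the synced path calls a Mastra-backed server route rather than this local stub.

```typescript
// Hypothetical sketch of the two triage paths described above. Both are
// stand-ins: the real synced path replays through a Mastra-backed route.

interface TriageResult {
  source: "local_draft" | "mastra_route";
  severity: "low" | "critical";
  recommendation: string;
}

// Offline: a deterministic local draft, so capture never blocks on the network.
function localDraft(pressureOverLimit: boolean): TriageResult {
  return {
    source: "local_draft",
    severity: pressureOverLimit ? "critical" : "low",
    recommendation: pressureOverLimit
      ? "Isolate unit and escalate to supervisor"
      : "Log and monitor",
  };
}

// On reconnect: replay the same incident through the server triage route.
// The result is written back into the same operational record, so AI and
// human actions share one auditable history.
function replayOnSync(draft: TriageResult): TriageResult {
  return { ...draft, source: "mastra_route" };
}

const draft = localDraft(true);
const synced = replayOnSync(draft);
```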

Tech stack

  • Frontend: TanStack Start, React, TypeScript, Tailwind
  • Local-first data: PowerSync Web SDK + local SQLite (wa-sqlite)
  • Offline shell: PWA manifest + service worker + installable field app shell
  • AI orchestration: Mastra workflow route
  • Backend path: Supabase client scaffolding + PowerSync Sync Streams design
  • Voice path: Cactus-ready transcription abstraction with safe demo fallback

Current MVP features

Technician flow

  • seeded critical pressure-alarm scenario
  • editable inspection checklist with local-first progress updates
  • local incident creation with notes + transcript draft
  • local photo evidence capture/upload with offline-ready preview cards
  • local voice memo recording or audio upload with offline playback in the case file
  • visible offline/online simulation toggle
  • installable field-app shell with cached offline fallback
  • visible pending sync queue
  • local AI triage draft

Supervisor flow

  • shared operational scope bar that filters the control center by site, status, severity, worker, sync state, and approval posture while keeping global queue truth visible
  • operations risk map derived locally from site coordinates, incident posture, backlog, and sync state
  • SLA watchlist derived from local incident timestamps and task deadlines
  • evidence readiness board derived locally from notes, photos, audio, transcripts, and recommendations
  • asset escalation forecast derived locally from incidents, tasks, sync blockers, evidence gaps, and playbook coverage
  • shift handoff command board derived locally from unresolved mentions, SLA pressure, evidence gaps, and site posture
  • approval decision impact preview that shows the likely consequence of approval vs rejection before the supervisor commits
  • operational clearance board derived locally from approval, evidence, checklist, conflict, task, and handoff posture before anyone restarts or releases work
  • AI governance simulator that shows which incidents could auto-route, still need human review, or must stay hard-held under configurable local policy rules
  • response capacity board derived locally from site workload, clearance blockers, handoff pressure, and governance queue relief to show which site can absorb another urgent incident or needs mutual aid
  • surge dispatch simulator that models a fresh urgent incident, donor-site backup, and governance queue relief before supervisors move people across sites
  • recovery timeline simulator that projects support ETA, approval drag, field containment, and full stabilization window from the current dispatch plan
  • comparative recovery planner that ranks alternate donor/support strategies and lets supervisors load the best-balance plan directly into the simulator
  • second-hit resilience planner that shows which site becomes the next weak point if another incident lands before the current plan fully stabilizes the network
  • network hardening planner that ranks the safest recovery move when supervisors expect another hit before the first response fully cools down
  • command sequence planner that turns the safest move, queue relief, and weak-point protection into an immediate step-by-step supervisor playbook
  • approval queue optimizer that ranks which queued incident Jonas should approve or reject next to support the active network plan without destabilizing the weak point
  • support release planner that ranks whether the donor site should hold support, stage to standby, or stand down now so backup is not pulled too early
  • watch reset planner that phases regional elevated watch, targeted weak-point watch, and return to normal rotation after the donor stands down
  • stability tripwire planner that tells supervisors what should reopen elevated watch, recommit the donor, or delay normal rotation if the corridor starts wobbling again
  • rollback response planner that turns each rollback tripwire into the exact contingency sequence supervisors should run if the corridor actually reheats
  • rollback re-entry planner that tells supervisors when rollback has cleared enough to retry donor release, targeted watch, or full normal rotation
  • rollback branch planner that tells supervisors whether to retry the same lane, re-enter on a harder lane, or localize recovery to protect the donor after rollback
  • rollback retirement planner that tells supervisors when the current post-rollback lane or donor is exhausted and should be formally retired in favor of a steadier branch
  • rollback rebuild planner that tells supervisors what has to rebuild after retirement and when the retired lane or donor can return in shadow, standby, or full service again
  • rollback certification ledger that tells supervisors what proof is still missing before a retired lane or donor earns shadow, standby, or full live certification again
  • rollback trust budget planner that tells supervisors how much shadow, live load, and surge exposure a recovered lane or donor can safely take right now, plus the certification debt still being carried
  • rollback load-ramp planner that tells supervisors when the recovered lane or donor can widen from guarded return to metered live, surge-ready, and full-open duty again, how many clean cycles remain, and how fast certification debt is burning down
  • rollback ramp guard that tells supervisors whether the current widening plan is being overspent, how much safe live / surge headroom remains, and whether RelayOps should hold, tighten, or freeze before another rollback starts
  • rollback ramp relief planner that tells supervisors which move buys back safe corridor headroom fastest — hold current line, tighten the safe cap, freeze widening, or load a relief branch — plus the reserve gain and tradeoff of each move
  • rollback ramp policy selector that tells supervisors which relief move wins under corridor-first, donor-first, output-first, or low-churn doctrine so the chosen rollback posture matches the actual operating priority instead of tribal instinct
  • rollback ramp doctrine trigger planner that tells supervisors when the active doctrine should hold, arm a flip, or switch from low-churn / output-first into donor-first or corridor-first before the wrong priority lingers too long
  • rollback ramp doctrine hysteresis planner that tells supervisors how long to keep a doctrine latched, when to buffer a stricter flip, and when a lighter doctrine can safely come back without thrashing on noisy corridor reviews
  • rollback ramp doctrine review cadence planner that tells supervisors how often to recheck the current doctrine, when doctrine confidence is decaying, and when RelayOps should force a fresh review before stale proof drives the next rollback move
  • rollback ramp doctrine proof gate planner that tells supervisors whether the current doctrine is trustworthy enough right now, which proof gaps or stale packets still block relief, and which doctrine is truly trusted now versus only safe under guard
  • rollback ramp doctrine arbitration planner that tells supervisors which doctrine should actually govern once raw doctrine score, latch discipline, freshness, and proof are combined, and whether RelayOps should govern now, govern under guard, hold current, revalidate first, or block the move outright
  • rollback ramp doctrine succession planner that tells supervisors which doctrine should govern now, which stricter doctrine should take over if pressure worsens, which lighter doctrine can return if calm survives, and which repair doctrine should anchor revalidation if proof or freshness weakens
  • mention-aware action comment composer plus open mention inbox for structured handoffs
  • approval queue that lights up after sync
  • incident case file with a routed decision brief, packet/outlook summary, photo evidence, transcript, extracted entities, and recommendation rationale
  • sync-aware collaboration notes and generated status updates
  • approve/reject actions
  • follow-up task generation

Governance flow

  • audit timeline spanning human, AI, and sync events
  • status badges for sync state, severity, and approval state
  • control-room hackathon readiness cockpit for submission proof, live telemetry posture, and final artifact checks
  • guided demo rail that makes the offline → sync → approval progression obvious
  • shared workflow baton that keeps the current offline → sync → approval stage visible in the utility rail, command palette, and action inbox
  • live demo proofboard in the utility rail that turns the current workflow stage into judge-friendly receipts, narrator prompts, and one-click jumps
  • always-visible attention router in the utility rail so the strongest next routes are accessible without opening overlays first
  • Alt+1 / Alt+2 / Alt+3 scoped fast routes so operators can execute the top attention actions from anywhere without mouse travel
  • command-palette recommended actions that surface matching Alt+1 / Alt+2 / Alt+3 badges for overlay parity during live routing
  • in-palette Alt+1 / Alt+2 / Alt+3 execution so those top scoped routes still fire even while the command palette search input has focus
  • grouped action inbox with keyboard navigation so alerts route directly into the live workspace, case file, and blocker surfaces
  • collaboration notes that enter the same local queue and sync lifecycle as other operational changes
  • response-tracked action comments that can mention supervisors or the next shift and stay visible until resolved
  • conflict watchlist that compares local checklist state against a remote supervisor restart hold and offers local / remote / merge resolution

Hybrid retrieval flow

  • structured operational facts from incidents, work orders, and tasks
  • semantic SOP augmentation from playbooks
  • similar past incidents ranked from synchronized local history with carryover response steps
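The third bullet can be sketched as a scoring pass over local history. The scoring terms below are illustrative only, not the app's actual retrieval logic.

```typescript
// Hypothetical sketch of ranking similar past incidents from local history.
// The scoring terms are illustrative, not the app's actual retrieval logic.

interface PastIncident {
  id: string;
  asset: string;
  kind: string; // e.g. "pressure_anomaly"
  responseSteps: string[]; // carryover steps surfaced to the supervisor
}

function rankSimilar(
  current: { asset: string; kind: string },
  history: PastIncident[],
): PastIncident[] {
  // Toy similarity: same incident kind counts double, same asset counts once.
  const score = (p: PastIncident) =>
    (p.kind === current.kind ? 2 : 0) + (p.asset === current.asset ? 1 : 0);
  return [...history]
    .filter((p) => score(p) > 0)
    .sort((a, b) => score(b) - score(a));
}

const history: PastIncident[] = [
  { id: "INC-101", asset: "P-204", kind: "pressure_anomaly", responseSteps: ["isolate", "vent"] },
  { id: "INC-077", asset: "P-310", kind: "leak", responseSteps: ["contain"] },
];

const similar = rankSimilar({ asset: "P-204", kind: "pressure_anomaly" }, history);
// similar[0] is the closest prior pressure anomaly, with carryover steps
```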

Repo structure

  • src/components/relayops/ — product UI, provider, live-query hooks
  • src/lib/relayops/ — PowerSync schema, seed data, repository, SQL queries, AI helpers
  • src/mastra/ — workflow registration and triage workflow
  • src/routes/api/triage.ts — server route for synced triage execution
  • powersync/sync-streams.yaml — stream topology for the production sync model
  • supabase/migrations/ — SQL schema for the hosted Supabase path
  • supabase/seed.sql — SQL seed set for the hosted demo path
  • sample-data/ — portable demo scenarios and seed snapshots
  • SCHEMA.md — table-by-table schema notes for judges and reviewers
  • SETUP.md — quick bootstrap plus optional live telemetry setup
  • DEMO_SCRIPT.md — 2–3 minute judge-optimized walkthrough
  • ARCHITECTURE.md — detailed architecture walkthrough

The Control Room workspace also includes a Hackathon readiness cockpit so setup state, telemetry posture, and remaining submission tasks are visible without leaving the product shell. It now includes:

  • an advanced Judge launch sequence plus a Project goals verification ledger, so every major claim can be tied to a live proof surface, a verification method, and a repo source during the demo itself
  • an interactive Judge console powered by the live proofboard state, so the presenter can select a proof beat, inspect its current checks, and jump directly into the exact shell surface that proves it
  • a guided demo walkthrough upgraded into a proper presenter-mode overlay with keyboard navigation, live-proof status notes, and direct “show in shell” routing
  • exportable packaging: a downloadable proof bundle, a markdown submit kit, a demo talk track, a Typeform-ready answer pack, an exportable judge brief, a final-hour brief, a dedicated judge launch script, and a screenshot capture map, so final packaging stays as operational as the product itself
  • command palette and Control Room action grid shortcuts that jump directly into the cockpit, open the judge walkthrough, and route to the current proof beat, which makes last-mile demo and submission prep much faster under deadline pressure

Copilot accelerators

This repository now includes a project-specific Copilot customization layer in .github/ so recurring RelayOps work can be faster and more consistent.

Always-on guidance

  • .github/copilot-instructions.md — repo-wide priorities for offline-first design, operator UX, and validation
  • .github/instructions/relayops-ui.instructions.md — UI-specific guidance for command-center surfaces
  • .github/instructions/relayops-data.instructions.md — PowerSync/data-model guidance for schema, queries, and sync workflows
  • .github/instructions/relayops-docs.instructions.md — judge/demo-oriented markdown guidance

Slash commands

These appear after typing / in GitHub Copilot Chat:

  • /unleash-relayops-copilot — run a research-backed, highest-leverage autonomous RelayOps engineering pass that uses the repo’s docs, instructions, prompts, skills, agents, and validation workflow together
  • /relayops-quality-pass — pick and ship the next highest-leverage improvement across the product
  • /relayops-offline-sync — debug offline-first, queue, sync promotion, approval, or conflict issues end to end
  • /align-relayops-workflow-focus — align the shell so workflow navigator, notifications, command palette, walkthrough, and drawer land on the exact live surface
  • /improve-relayops-command-center — run a focused UI/UX improvement pass on the command-center shell
  • /debug-relayops-sync — investigate a specific sync bug from mutation path to UI
  • /polish-relayops-demo — strengthen the hackathon demo story, docs, or visible product flow

Custom agent

  • relayops-command-center — a specialized agent for command-center UX, orchestration, accessibility, and demo-polish work in the React frontend

Prompt files are enabled in .vscode/settings.json and prompt-file recommendations are on, so these workflows should also surface more readily when starting a new chat in VS Code.

How to run

Local demo

npm install
npm run dev

Open the app, leave it in offline mode, create the seeded P-204 incident, then reconnect and sync.

If the browser offers it, click Install field app in the header to run RelayOps as a standalone field-device app with a cached offline shell.

Production build verification

npm run build

The production build has been verified with this command.

Environment setup

Placeholders are included in:

  • .env
  • .env.example

The current demo works without live backend credentials because the sync queue and triage flow are demonstrated through the local-first harness.

If you want the header to surface real PowerSync Sync Streams telemetry, set:

  • VITE_POWERSYNC_SYNC_MODE=telemetry
  • VITE_POWERSYNC_URL to a real PowerSync instance
  • VITE_POWERSYNC_TOKEN to a real token or JWT

In that mode, RelayOps keeps the stable local demo queue for write promotion while also showing live PowerSync connection and stream status.
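Concretely, telemetry mode maps to an .env fragment like this (the URL and token values are placeholders, not working credentials):

```shell
# Enable live Sync Streams telemetry in the header (placeholder values)
VITE_POWERSYNC_SYNC_MODE=telemetry
VITE_POWERSYNC_URL=https://your-instance.powersync.example
VITE_POWERSYNC_TOKEN=your-jwt-here
```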

For a full hosted path, wire in:

  • Supabase URL + anon key
  • PowerSync instance URL + token
  • model provider key
  • optional Cactus key

Best demo path for judges

  1. Show the header in offline mode.
  2. Open the seeded P-204 job and complete a checklist item locally.
  3. Capture one or two field photos plus a short voice memo, then create the seeded compressor incident locally.
  4. Point out the Install field app action or Offline shell cached pill in the header, then call out the pending actions queue, guided demo rail, local-only draft badges, and the evidence bundle stored locally in the case file.
  5. Point out the workflow baton, demo proofboard, attention router, and action inbox so judges can see the same workflow stage, receipts, and next-best routes persist even after you open overlays.
  6. Open the audit timeline and show that the incident was captured offline.
  7. Optional stretch: finish the remaining checklist items offline, reconnect, and show the conflict watchlist surfacing a restart-hold disagreement.
  8. Reconnect and trigger sync.
  9. Use the supervisor filters to focus the synced queue.
  10. Point at the Operations risk map and show that site posture is derived locally from replicated coordinates plus live incident state.
  11. Open the Evidence readiness board and show which incidents still need proof before review.
  12. Open the Asset escalation forecast and show which asset needs proactive attention next.
  13. Open the Shift handoff command board and show which site needs the strongest next-shift briefing from local synced state.
  14. Open the Decision impact preview and show what approval vs rejection will do before Jonas commits.
  15. Open the Operational clearance board and show whether RelayOps would actually let the team proceed or keep the incident on hold.
  16. Open the AI governance simulator and show how changing the policy shifts incidents between auto-route, human review, and hard hold.
  17. Open the Response capacity board and show which site can still absorb another urgent incident plus where RelayOps would send backup right now.
  18. Open the Surge dispatch simulator and show whether a fresh critical incident plus backup from another site actually stabilizes the network or just shifts the risk.
  19. Open the Recovery timeline simulator and show how long the network would take to stabilize, which phase is slowest, and whether the current support ETA is actually worth the move.
  20. Open the Recovery plan comparison board and show which donor/support package is fastest versus which one is the best balance for the whole network.
  21. Open the Second-hit resilience planner and show which site becomes the next weak point if another incident lands before the first response fully cools down.
  22. Open the Network hardening planner and show which recovery move RelayOps recommends when the team wants the safest posture instead of the fastest ETA.
  23. Open the Command sequence planner and show how RelayOps turns the safest move into a concrete first-move / next-move command list.
  24. Open the Approval queue optimizer and show which queued incident Jonas should approve or reject next while the hardening plan is still active.
  25. Open the Support release planner and show when East Compressor Yard can safely stand down instead of guessing when to pull backup.
  26. Open the Watch reset planner and show when the network can narrow from elevated watch to targeted watch, then fully return to normal rotation.
  27. Open the Stability tripwire planner and show what exact signals would force RelayOps to reopen elevated watch, pull East back in, or delay the return to normal.
  28. Open the Rollback response planner and show the exact fallback sequence RelayOps would run if one of those rollback tripwires actually fires.
  29. Open the Rollback re-entry planner and show when RelayOps would safely retry donor release or normal rotation after the rollback sequence lands.
  30. Open the Rollback branch planner and show whether RelayOps should retry the same lane, a safer lane, or a donor-protective local lane once rollback finally clears.
  31. Open the Rollback retirement planner and show when RelayOps should stop retrying the old lane entirely or retire East as donor instead of debating another shaky retry.
  32. Open the Rollback rebuild planner and show what has to rebuild after retirement before the old lane or donor can return in shadow, standby, or full service again.
  33. Open the Rollback certification ledger and show what proof is still missing before East or the retired lane earns shadow, standby, or full live certification again.
  34. Open the Rollback trust budget planner and show how much live load, shadow duty, and surge exposure the recovered lane or donor can safely take now plus the certification debt still being carried.
  35. Open the Rollback load-ramp planner and show when the recovered lane or donor can widen from guarded return to metered live, surge-ready, and full-open duty again, plus how many clean cycles and how much debt burn-down remain.
  36. Open the Rollback ramp guard and show whether the current widening plan is still inside the safe corridor or already needs a hold, tighten, or freeze call before another rollback starts.
  37. Open the Rollback ramp relief planner and show which move buys back corridor headroom fastest, whether RelayOps should tighten caps or change branches, and how much reserve that move recovers before another rollback starts.
  38. Open the Rollback ramp policy selector and show how the recommended move changes when the supervisor prioritizes raw corridor safety, donor protection, live output preservation, or the lowest coordination churn.
  39. Open the Rollback ramp doctrine trigger planner and show when RelayOps would stop being output-first or low-churn, arm a donor-first flip, or immediately hand control back to corridor-first before the wrong doctrine lingers.
  40. Open the Rollback ramp doctrine hysteresis planner and show how RelayOps buffers doctrine flips, keeps a stricter doctrine latched after the handoff, and only lets output-first or low-churn come back after calm survives the release window.
  41. Open the Rollback ramp doctrine review cadence planner and show how RelayOps decides when the current doctrine is still fresh, when confidence is decaying, and when the supervisor should revalidate doctrine now instead of running on stale proof.
  42. Open the Rollback ramp doctrine proof gate planner and show whether corridor-first is actually trusted now, what proof debt or packet gaps still block relief, and which doctrine RelayOps would trust under guard versus block outright.
  43. Open the Rollback ramp doctrine arbitration planner and show which doctrine should actually govern once raw score, latch discipline, freshness, and proof are combined — plus whether RelayOps should govern now, hold current, revalidate first, or block the move.
  44. Open the Rollback ramp doctrine succession planner and show which doctrine governs now, which stricter doctrine owns the hotter lane if East reheats, which lighter doctrine can return only after calm survives, and which repair doctrine anchors revalidation if proof or freshness weakens.
  45. Open the Open mention inbox or Action comment thread and show which handoffs are still waiting on Jonas or the next shift.
  46. Approve the recommendation and show generated tasks.
  47. Close on why this only works well because PowerSync makes local state trustworthy.

Optional stretch: open the case file and point at Similar past incidents to show RelayOps retrieving the closest prior pressure anomaly from local synced history before the supervisor decides what to do next.

Also call out the SLA watchlist to show that supervisors can see which responses are about to breach local response targets without waiting for a server-side report.

If you want one extra punchy beat, open the case file and point at the Evidence readiness summary card so judges can see that handoff completeness is computed directly from synchronized local records instead of a separate reporting backend.

And if you want a second punchy beat, point at the Operational outlook card in the case file to show that RelayOps can predict the next likely escalation from local synchronized state before another incident fires.

For one more decision-support beat, point at the Operational clearance card in the case file and show that RelayOps can say "hold" or "proceed with guardrails" from the same local state graph that drives approvals and handoffs.

For one more governance beat, point at the AI governance outcome card in the case file and show that RelayOps can simulate the autonomy boundary before the team trusts AI to route work without a human click.

For one more staffing beat, point at the Site response capacity card in the case file and show that RelayOps can say which site has reserve, which one needs relief, and where mutual aid should go from the same synchronized local state graph.

For one more dispatch beat, point at the Current surge projection card in the case file and show that RelayOps can test “what if another critical incident lands right now?” before the supervisor pulls a person off another site.

For one more recovery beat, point at the Current stabilization window card in the case file and show that RelayOps can estimate support ETA, slowest recovery phase, and the real time-to-stable from the same synchronized local state graph.

For one more planning beat, point at the Recommended recovery plan card in the case file and show that RelayOps can rank alternate donor/support strategies before the supervisor commits the move.

For one more resilience beat, point at the Next weak point after current plan card in the case file and show that RelayOps can predict where the network breaks first if a second hit lands too soon.

For one more hardening beat, point at the Recommended hardening move card in the case file and show that RelayOps can recommend the safest donor/support strategy before the next hit lands.

For one more execution beat, point at the Immediate command sequence card in the case file and show that RelayOps can turn the safest plan into the exact next steps the supervisor should run first.

For one more queue-governance beat, point at the Recommended approval queue action card in the case file and show that RelayOps can say whether Jonas should approve or reject the selected queued incident to support the active plan.

For one more stand-down beat, point at the Recommended support release window card in the case file and show that RelayOps can say when East should hold support, move to standby, or stand down entirely.

For one more watch-reset beat, point at the Network watch reset outlook card in the case file and show that RelayOps can say when the whole corridor can leave elevated watch and return to normal rotation.

For one more rollback beat, point at the Top rollback tripwire card in the case file and show that RelayOps can say exactly what signal would force the team to reopen elevated watch or pull East back into the corridor.

For one more rollback-response beat, point at the Recommended rollback playbook card in the case file and show that RelayOps can turn a rollback trigger into the exact first move, follow-up sequence, and expected result before anyone improvises under pressure.

For one more rollback-recovery beat, point at the Rollback re-entry outlook card in the case file and show that RelayOps can say when the corridor is finally stable enough to retry donor release or return to normal after rollback.

For one more rollback-branch beat, point at the Recommended rollback branch card in the case file and show that RelayOps can say whether the corridor should retry the same lane, load a harder retry branch, or keep the donor off the hot path entirely after rollback.

For one more rollback-retirement beat, point at the Recommended rollback retirement call card in the case file and show that RelayOps can say when the corridor should stop retrying the old lane entirely or retire East as donor after repeated rollback pressure.

For one more rollback-rebuild beat, point at the Recommended rollback rebuild path card in the case file and show that RelayOps can say what has to rebuild after retirement and when the old lane or donor is finally safe to re-qualify in shadow, standby, or live service.

For one more rollback-certification beat, point at the Recommended rollback certification call card in the case file and show that RelayOps can say exactly what proof is still missing before the old lane or donor earns shadow, standby, or full live certification again.

For one more rollback-budget beat, point at the Recommended rollback trust budget card in the case file and show that RelayOps can say exactly how much live load, shadow duty, and surge exposure the recovered lane or donor can safely take now instead of spending all of that newly earned trust at once.

For one more rollback-ramp beat, point at the Recommended rollback load ramp card in the case file and show that RelayOps can say exactly when the recovered lane or donor can widen from guarded return to metered live, surge-ready, and full-open duty again instead of reopening too fast on vibes alone.

For one more rollback-drift beat, point at the Current rollback ramp guard card in the case file and show that RelayOps can say whether the current widening line is already overspending its safe corridor, how much live/surge headroom is really left, and whether the team should hold, tighten, or freeze before another rollback starts.

For one more rollback-relief beat, point at the Recommended rollback ramp relief card in the case file and show that RelayOps can rank which move buys back the most corridor headroom fastest — tightening caps, freezing widening, or loading a different branch — instead of improvising under drift.

For one more rollback-policy beat, point at the Recommended rollback ramp policy card in the case file and show that RelayOps can explain which doctrine should win right now — corridor-first, donor-first, output-first, or low-churn — before the supervisor commits to a headroom buy-back move.

For one more rollback-doctrine beat, point at the Recommended rollback ramp doctrine trigger card in the case file and show that RelayOps can explain when the current doctrine should hold, when a flip is merely armed, and when the corridor has changed enough that output-first or low-churn should give way to donor-first or corridor-first control.

For one more rollback-latch beat, point at the Recommended rollback ramp latch rule card in the case file and show that RelayOps can explain how long the current doctrine should stay latched, when a stricter flip needs buffering instead of a panic swap, and when output-first or low-churn can safely come back without doctrine thrash.

For one more rollback-cadence beat, point at the Recommended rollback ramp doctrine review cadence card in the case file and show that RelayOps can explain whether the current doctrine is still fresh, how quickly confidence is decaying, and when the supervisor should force a new doctrine review before stale corridor proof drives the next move.

For one more rollback-proof beat, point at the Recommended rollback ramp doctrine proof gate card in the case file and show that RelayOps can explain whether the active doctrine is trusted now, only trusted under guard, or still blocked on proof debt, stale packets, or missing certification marks.

For one more rollback-arbitration beat, point at the Recommended rollback ramp doctrine arbitration card in the case file and show that RelayOps can explain which doctrine should actually govern after score, latch discipline, freshness, and proof are combined — not just which doctrine looks best in isolation.

For one more rollback-succession beat, point at the Recommended rollback ramp doctrine succession card in the case file and show that RelayOps can explain which doctrine governs now, which stricter doctrine owns the hotter handoff, which lighter doctrine only returns after calm survives, and which repair doctrine should anchor revalidation if proof or freshness weakens.

For one more visible collaboration beat, add an action comment mentioning the next shift, then point out that it enters the same local queue / sync lifecycle and stays visible in the inbox until resolved.

For one more command-center beat, point at the Shift handoff command board and show that RelayOps can tell the next shift what matters most at the site level before anyone opens a single incident manually.

Extra artifacts for judges

  • SETUP.md — local bootstrap, optional live telemetry mode, and hosted-path notes
  • DEMO_SCRIPT.md — concise 2–3 minute demo narration
  • HACKATHON_SUBMISSION_SCORECARD.md — official requirement/rubric mapping plus final submission checklist
  • ARCHITECTURE_SUBMISSION_SUMMARY.md — copy-ready architecture narrative for submission forms
  • SUBMISSION_FORM_PACKAGE.md — copy/paste form answers, category picks, and final submit checklist
  • SCHEMA.md — relational model and stream mapping
  • sample-data/relayops-demo-scenarios.json — portable seed scenario summary
  • screenshots/README.md — suggested capture list for the final submission page

Bonus-category mapping

Best Local-First Submission

  • PowerSync-backed local state is the live app model
  • sync queue and reconciliation are visible in UX
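The two bullets above can be sketched as a minimal model of the write lifecycle. This is an illustrative toy, not RelayOps' actual API: the names (`Incident`, `LocalStore`, `pendingQueue`) are hypothetical, and in the real app the promotion step is handled by PowerSync's upload and reconciliation machinery rather than a local method.

```typescript
// Illustrative sketch of the local-first write lifecycle (hypothetical names).
type SyncState = "pending" | "synced";

interface Incident {
  id: string;
  severity: "low" | "high";
  syncState: SyncState;
}

class LocalStore {
  private rows: Incident[] = [];

  // Writes always land locally first; the UI reads this state immediately.
  createIncident(id: string, severity: Incident["severity"]): Incident {
    const row: Incident = { id, severity, syncState: "pending" };
    this.rows.push(row);
    return row;
  }

  // The visible pending-sync queue is just a query over local state.
  pendingQueue(): Incident[] {
    return this.rows.filter((r) => r.syncState === "pending");
  }

  // On reconnect, pending rows are promoted to synced (the real app
  // delegates this to PowerSync rather than flipping a flag locally).
  reconnect(): void {
    for (const r of this.rows) r.syncState = "synced";
  }
}
```

A technician's high-severity incident shows up in `pendingQueue()` instantly, then moves to the synced supervisor queue after `reconnect()`, which is exactly the progression the demo makes visible.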

Best Submission Using Supabase

  • Supabase is the intended backend source of truth and auth/storage path
  • the repo includes the client scaffolding and architecture for this integration

Best Submission Using Mastra

  • incident triage is routed through a Mastra workflow endpoint
  • AI output is structured and written back into operational state
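"Structured output written back into operational state" implies a schema check between the workflow and the database. The sketch below shows that boundary; the `TriageResult` shape and `parseTriage` helper are invented for illustration, and the real schema lives in the Mastra workflow definition.

```typescript
// Hypothetical shape of a structured triage result (not the repo's schema).
interface TriageResult {
  severity: "low" | "medium" | "high";
  recommendation: "approve" | "reject" | "escalate";
  rationale: string;
}

// Parse and validate the workflow's JSON output before it touches
// operational state, so a malformed model response never lands in SQLite.
function parseTriage(raw: string): TriageResult {
  const data = JSON.parse(raw) as Partial<TriageResult>;
  const severities = ["low", "medium", "high"];
  const recommendations = ["approve", "reject", "escalate"];
  if (
    !severities.includes(data.severity ?? "") ||
    !recommendations.includes(data.recommendation ?? "") ||
    typeof data.rationale !== "string"
  ) {
    throw new Error("triage output failed schema check");
  }
  return data as TriageResult;
}
```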

Best Submission Using TanStack Start/DB/AI

  • TanStack Start is the full-stack shell for the product
  • the app is intentionally built around typed routes and server handlers
  • scope remains disciplined instead of bolting on unstable extras

Best Submission Using Cactus

  • the voice transcription layer is abstracted so Cactus can be the primary production transcription path
  • the current demo includes a safe local fallback to preserve reliability
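The abstraction those bullets describe reduces to one interface with two implementations. The names below (`Transcriber`, `transcribeWithFallback`) are illustrative, not the repo's actual module layout; the point is only that a primary-engine failure degrades to the local path instead of blocking capture.

```typescript
// Sketch of the transcription abstraction: one interface, a preferred
// engine (Cactus in production), and a safe local fallback.
interface Transcriber {
  transcribe(audio: Uint8Array): Promise<string>;
}

class FallbackTranscriber implements Transcriber {
  async transcribe(_audio: Uint8Array): Promise<string> {
    return "[local fallback transcript]";
  }
}

// Try the primary engine; on any failure, degrade to the local fallback
// so field capture never blocks on the transcription path.
async function transcribeWithFallback(
  primary: Transcriber,
  fallback: Transcriber,
  audio: Uint8Array
): Promise<string> {
  try {
    return await primary.transcribe(audio);
  } catch {
    return fallback.transcribe(audio);
  }
}
```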

Scope tradeoffs

This repo intentionally prioritizes a sharp demo over broad completeness.

Included now:

  • local-first incident creation
  • visible sync progression
  • supervisor approval workflow
  • audit trail
  • hybrid retrieval assistant

Deferred for the next milestone:

  • real connector upload logic
  • Supabase auth + RLS
  • binary voice upload
  • pgvector-backed retrieval
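The hybrid retrieval assistant listed under "Included now" pairs lexical matching with a second, cheaper ranking signal while the pgvector path stays deferred. A toy sketch of that blend, with invented names and a recency term standing in for the eventual vector similarity:

```typescript
interface Doc {
  id: string;
  text: string;
  updatedAt: number; // epoch-ish tick, illustrative
}

// Token-overlap score: the lexical half of the hybrid ranker.
function keywordScore(query: string, doc: Doc): number {
  const q = new Set(query.toLowerCase().split(/\s+/));
  const words = doc.text.toLowerCase().split(/\s+/);
  const hits = words.filter((w) => q.has(w)).length;
  return words.length === 0 ? 0 : hits / words.length;
}

// Hybrid rank: blend lexical overlap with recency. The deferred
// production path would swap recency for pgvector cosine similarity.
function rank(query: string, docs: Doc[], now: number): Doc[] {
  const score = (d: Doc) =>
    0.7 * keywordScore(query, d) + 0.3 * (1 / (1 + (now - d.updatedAt)));
  return [...docs].sort((a, b) => score(b) - score(a));
}
```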

More detail

See ARCHITECTURE.md for the full system breakdown and design rationale.
