🚀 HivePath AI — The Self‑Healing Logistics Platform

We don’t just route, we adapt. HivePath AI is an accessibility‑aware, risk‑aware routing engine that “sees” every stop via street‑level imagery, predicts service times with a graph brain, and continuously re‑optimizes with a swarm of agents as traffic, weather, and safety conditions change.


TL;DR

Trucks waste time, fuel, and money because routes are static, streets have real-world constraints (height/curb/access), and loads are poorly planned.

HivePath AI is a web app that self-heals delivery plans in real time using a Knowledge-Graph + GNN brain, a swarm of 20+ inspector agents over maps and street imagery, and a dynamic VRP/knapsack optimizer.

In city-scale simulations, we show fewer late deliveries, higher truck fill rates, and fewer risky miles, which means faster deliveries, lower cost and fuel use, and lower CO₂.


💡 Inspiration

Every day, heavy vehicles circle blocks, hit height/curb limitations, take inefficient roads, stop unnecessarily, and deliver half-empty. Static routers don't "see" reality or reprioritize high-value consignments when things change. We wanted a system that thinks while it drives.

  • Up to 31% of delays stem from poor last‑meter accessibility and unrealistic service‑time estimates.
  • Route plans rarely account for real‑time risk (weather/incident/crime) at the segment level.
  • Existing tools are static and reactive. We wanted a self‑healing system.

🌟 What It Does

HivePath AI combines three AI modalities end‑to‑end:

  1. Visual Intelligence Engine • Fetches multi‑angle street‑level imagery for each stop • Detects ramps, stairs, curb cuts, loading zones, signage, hazards • Produces a 0‑100 Accessibility Score + structured features

  2. Knowledge Graph Brain (GNNs) • A living graph (locations, vehicles, drivers, time, context) • Service‑time prediction (learned from history, time of day, weather) • Risk scoring for segments with external signals (incidents, crime, weather)

  3. Swarm Perception + Self‑Healing Optimization • Lightweight 20+ Inspector Agents watch traffic, weather, incidents in parallel • An Architect Agent decides when to re‑solve VRP • OR‑Tools multi‑objective solver balances cost ⟷ time ⟷ risk ⟷ accessibility
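
The cost ⟷ time ⟷ risk ⟷ accessibility balance can be sketched as a single scalar arc cost; a minimal sketch, with illustrative feature names and weights (not our production configuration):

```python
def arc_cost(edge, weights=None):
    """Blend cost, time, risk, and accessibility into one scalar arc cost.

    `edge` is a dict of per-segment features; all names and weights here
    are illustrative placeholders, not the production values.
    """
    w = weights or {"cost": 1.0, "time": 0.6, "risk": 2.5, "access": 1.5}
    # A lower accessibility score at the destination stop means a higher penalty
    access_penalty = (100 - edge["dest_accessibility"]) / 100
    return (w["cost"] * edge["fuel_cost"]
            + w["time"] * edge["travel_min"]
            + w["risk"] * edge["risk_score"]
            + w["access"] * access_penalty)

edge = {"fuel_cost": 2.0, "travel_min": 12.0,
        "risk_score": 0.3, "dest_accessibility": 40}
```

Raising one weight (say, `risk`) shifts the solver toward safer segments at the expense of distance or time, which is how the presets differ.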

📈 Impact on our testbed

  • 18% efficiency gain vs. baseline planners
  • 31% fewer late deliveries via better service‑time prediction
  • 36% reduction in risky distance
  • 7.4% lower operational cost and CO₂

Note: Metrics measured on our internal Boston‑area testbed and synthetic workloads created during the hackathon.


🧠 How We Built It

Five‑Layer Architecture

  1. Data Ingestion & Signals
  • Maps (distance, traffic), street‑level imagery, weather, public safety/incident feeds, and a small custom accessibility dataset

  2. AI Processing
  • Computer Vision (OpenCV + BLIP‑style captioning) to extract access features
  • Graph Neural Networks (PyTorch/Lightning) for service‑time & risk prediction

  3. Optimization Core
  • OR‑Tools VRP with custom penalty terms and weighted objectives
  • Warm‑start clustering and LRU caches for repeated subproblems

  4. Analytics & Monitoring
  • Latency/stability dashboards, solve‑quality KPIs, carbon estimates

  5. Frontend (SvelteKit + Tailwind + Three.js)
  • Live 2D/3D route visualization, accessibility overlays, swarm state, and “re‑opt triggers”

🔌 Try It (Sample API)

POST /api/v1/optimize/routes
{
  "locations": [
    {"id":"depot","lat":42.3601,"lng":-71.0589,"type":"depot"},
    {"id":"stop1","lat":42.3611,"lng":-71.0599,"type":"delivery"},
    {"id":"stop2","lat":42.3621,"lng":-71.0609,"type":"delivery"}
  ],
  "vehicles": [
    {"id":"t1","capacity":50,"start_location":"depot"},
    {"id":"t2","capacity":40,"start_location":"depot"}
  ],
  "constraints": {
    "max_route_time":480,
    "prioritize_accessibility": true,
    "avoid_high_risk_areas": true
  },
  "preset": "balanced"  // ultra_fast | balanced | high_quality
}
GET /api/v1/predictions/service-times?location_id=stop1&weather=rain&time_of_day=14:30
POST /api/agents/swarm
{
  "action":"deploy",
  "data":{"center":{"lat":42.3601,"lng":-71.0589},"agents":5,"strategy":"grid"}
}
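
Assuming the service runs locally (the base URL is illustrative), the optimize endpoint above can be called from Python's standard library; a minimal sketch:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed local dev server, not a published URL

payload = {
    "locations": [
        {"id": "depot", "lat": 42.3601, "lng": -71.0589, "type": "depot"},
        {"id": "stop1", "lat": 42.3611, "lng": -71.0599, "type": "delivery"},
    ],
    "vehicles": [{"id": "t1", "capacity": 50, "start_location": "depot"}],
    "constraints": {"max_route_time": 480, "prioritize_accessibility": True},
    "preset": "balanced",
}

def optimize(body):
    """POST the request body to /api/v1/optimize/routes and return the JSON reply."""
    req = urllib.request.Request(
        f"{BASE}/api/v1/optimize/routes",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```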

🧩 Deep Dive (Unique Bits)

Visual Intelligence for Logistics: Multi‑angle imagery (0°, 90°, 180°, 270°) → access features → an Accessibility Score feeding the solver.
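
How detected features might fold into the 0–100 Accessibility Score can be sketched as a weighted tally; feature names and weights here are illustrative, the real score is model‑driven:

```python
def accessibility_score(features):
    """Fold detected street-level features into a 0-100 accessibility score.

    Illustrative sketch: bonuses/penalties are placeholder weights, not the
    learned model used in production.
    """
    score = 50.0  # neutral baseline before any evidence
    bonuses = {"ramp": 15, "curb_cut": 10, "loading_zone": 20, "clear_signage": 5}
    penalties = {"stairs": 20, "blocked_curb": 15, "construction": 10}
    for f in features:
        score += bonuses.get(f, 0) - penalties.get(f, 0)
    # Clamp to the documented 0-100 range
    return max(0.0, min(100.0, score))
```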

Graph Brain (GNN): A dynamic knowledge graph (≈1,247 entities, ≈3,891 relations) informs service‑time and risk predictions with context (hour‑of‑day, weather, neighborhood).

Swarm + Self‑Healing: Inspector agents stream signals; an Architect agent decides if/when to trigger a re‑solve. Results push to the driver UI in real time.

Multi‑Objective OR‑Tools: We add accessibility and risk terms to the cost function and respect operational constraints (time windows, capacities, pickup‑delivery pairs). Warm‑starts and caching cut solve times by ~43%.
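
The caching idea can be sketched with a canonical cache key over stop clusters, so re‑optimizations touching the same cluster reuse earlier work. A minimal sketch (the real solver state is much richer; `solve_cluster` is a hypothetical stand‑in):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def solve_cluster(stop_ids: frozenset, preset: str = "balanced"):
    """Stand-in for an OR-Tools sub-solve over one stop cluster.

    Keyed by a frozenset, so the same cluster in any order hits the cache.
    Returns a deterministic stop order for illustration only.
    """
    return tuple(sorted(stop_ids))

a = solve_cluster(frozenset({"stop3", "stop1", "stop2"}))
b = solve_cluster(frozenset({"stop2", "stop1", "stop3"}))  # same key: cache hit
```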


🐝 Swarm Perception Network

What it is (in one line)

A distributed set of lightweight agents that continuously sense the world (traffic, weather, incidents, accessibility), decide if/when to re‑plan, and trigger surgical re‑optimization so routes self‑heal without dispatcher babysitting.


Roles & Flow

1) Inspector Agents (many, specialized) • TrafficInspector — segment‑level speed/ETA shock detection • WeatherInspector — rain/wind/temperature thresholds along time windows • SafetyInspector — incident/crime feed changes near stops/edges • AccessInspector — curb/parking/ramp status from vision/cache • OpsInspector — driver events (delay, failure, break/shift rules)

2) Perception Bus (pub/sub) Normalized events published by inspectors:

{
  "type": "TRAFFIC_SPIKE",
  "route_id": "r12",
  "edge_id": "e_42",
  "delta_eta_sec": 420,
  "confidence": 0.91,
  "observed_at": "2025-10-05T04:12:00Z"
}
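
The bus itself can be a simple pub/sub fan‑out: inspectors publish normalized events like the one above, and subscribers (e.g., the Architect agent) receive only the types they registered for. An in‑memory sketch:

```python
from collections import defaultdict

class PerceptionBus:
    """Minimal in-memory pub/sub sketch of the perception bus (illustrative;
    the real system would use a networked broker)."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        """Register a handler for one normalized event type."""
        self.subscribers[event_type].append(handler)

    def publish(self, event):
        """Fan the event out to every handler subscribed to its type."""
        for handler in self.subscribers[event["type"]]:
            handler(event)

bus = PerceptionBus()
seen = []
bus.subscribe("TRAFFIC_SPIKE", seen.append)  # e.g., the Architect agent
bus.publish({"type": "TRAFFIC_SPIKE", "route_id": "r12", "delta_eta_sec": 420})
```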

3) Architect Agent (decision brain) Fuses events, queries the Knowledge Graph, and chooses local micro‑patch (swap 1–2 stops) vs global re‑solve, using hysteresis, bandit gating, and a route‑churn budget.

4) Optimizer (OR‑Tools, multi‑objective) Re‑plans with updated weights for cost ⟷ time ⟷ risk ⟷ accessibility, penalizes unnecessary change, pushes a minimal diff to the UI/driver.


Why it’s novel (and practical)

  • Event‑driven re‑planning (no blind polling)
  • Bandit‑gated triggers learn which signals are worth re‑opts
  • Route‑churn budget protects driver stability
  • Surgical re‑opts first; escalate to full solve only when ROI clears threshold
  • Explainable diffs: every change cites the events/weights that caused it

Mini Algorithms

Inspector loop (any signal)

last_state = None  # no prior observation on the first pass
while True:
    signal = read_provider()                       # poll the upstream feed
    delta = detect_material_change(signal, last_state)
    if delta and confidence(delta) > TAU:          # suppress low-confidence noise
        publish_to_bus(normalize(delta))           # emit a normalized bus event
    last_state = signal
    sleep(jitter(1, 3))                            # jittered sleep desynchronizes agents

Architect decision (bandit + hysteresis + churn control)

def should_reopt(event_batch):
    # Weighted evidence from this batch of perception-bus events
    score = 0
    for e in event_batch:
        score += w[e.type] * e.magnitude * e.confidence
    # prev_score is last tick's score; hysteresis damps threshold flip-flopping
    score = hysteresis_filter(score, prev_score)
    # Contextual bandit estimates the expected KPI gain of re-planning now
    exp_gain = bandit.estimate_gain(context=features(event_batch))
    return (score + exp_gain) > REOPT_THRESHOLD and churn_remaining() > 0

Surgical re‑opt (keep plan stable)

def surgical_reopt(plan, affected_nodes):
    lock_all_but(plan, neighborhood(affected_nodes, radius=2))
    set_objective_weights(alpha_cost, beta_time, gamma_risk, delta_access)
    add_penalty_for_deviation(plan, LAMBDA_STABILITY)
    return or_tools.solve(plan, time_limit=1500)  # ms

Route‑churn budget

def churn_remaining():
    return MAX_MOVES_PER_HOUR - moves_applied_last_60min
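
Concretely, the `moves_applied_last_60min` counter above can be a rolling one‑hour window over applied stop moves; a sketch (budget value and helper names are illustrative):

```python
from collections import deque
import time

MAX_MOVES_PER_HOUR = 3  # agreed churn budget (illustrative)
_move_log = deque()     # timestamps of applied stop moves

def record_move(now=None):
    """Log one applied stop move (now overridable for testing)."""
    _move_log.append(now if now is not None else time.time())

def churn_remaining(now=None):
    """Moves still allowed in the trailing hour."""
    now = now if now is not None else time.time()
    while _move_log and now - _move_log[0] > 3600:  # drop moves older than 1h
        _move_log.popleft()
    return MAX_MOVES_PER_HOUR - len(_move_log)
```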

Metrics to Show (and track live)

  • Re‑opt ROI = (ETA_saved + risk_reduced) / instruction_changes
  • Stability = 1 − (Levenshtein(old_order, new_order) / N)
  • Trigger precision = % of re‑opts that meet their KPI target (e.g., ETA reduced by ≥ 5%)
  • Latency = event → new plan (p50 < 2s; p95 < 6s on our testbed)
  • Churn = stop moves/hour (stay ≤ agreed budget, e.g., 3)
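
The Stability metric above is directly computable from two stop orderings; a self‑contained sketch using the classic edit‑distance DP:

```python
def levenshtein(a, b):
    """Edit distance between two stop orderings (single rolling-row DP)."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            # d[j] (old) = deletion, d[j-1] (new) = insertion, prev = substitution
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return d[n]

def stability(old_order, new_order):
    """Stability = 1 - Levenshtein(old, new) / N, with N = len(old_order)."""
    return 1 - levenshtein(old_order, new_order) / len(old_order)
```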

Ablation (30‑sec slide): Baseline VRP → +Vision → +GNN → +Swarm; plot stepwise gains in on‑time %, risky distance, churn.


⚙️ Tech Stack

Backend: Python, FastAPI, OR‑Tools, PyTorch, OpenCV, Redis
AI/ML: GNNs (service time/risk), BLIP‑style captions, classical CV
Data/APIs: Google Maps & street‑level imagery, OpenWeather, public safety feeds
Frontend: SvelteKit + TypeScript, Tailwind, Three.js, Map/GL
Infra: Docker, simple autoscaling, observability hooks

Built With tags: python • fastapi • pytorch • ortools • opencv • sveltekit • typescript • tailwindcss • threejs • redis • docker


🏆 Accomplishments

  • A first‑of‑its‑kind accessibility‑aware, vision‑augmented routing stack
  • Working Swarm Perception that actually triggers re‑optimization
  • Sub‑10ms median API handler latency on cached read paths; solver warm‑starts for fast re‑plans
  • A clean, interactive dashboard judges can use in seconds

🧪 Challenges

  • Street‑level imagery at scale → solved via batching + caching and careful quota use
  • GNN data sparsity → mitigated with transfer learning and synthetic augmentation
  • When to re‑solve → learned thresholds in the Architect agent to avoid thrash
  • Objective balancing → tuned weights for cost/time/risk/accessibility to match ops reality

📚 What We Learned

  • Multi‑modal AI (vision + graphs + agents) produces qualitatively better logistics decisions.
  • Accessibility is not a nice‑to‑have; it’s a root cause of delay and rework.
  • Explainability matters — overlays and factor attributions build trust with dispatchers.

🔭 What’s Next

  • 3 months: Cloud deployment, mobile driver app, better explainability for service‑time predictions.
  • 6–12 months: IoT/vehicle telemetry, warehouse → last‑mile coupling, demand forecasting.
  • 12–24 months: Cross‑city scaling, partner integrations, autonomous‑ready policies.


🔒 Safety, Privacy & Ethics

  • Respect imagery/API ToS; we cache features, not raw photos.
  • Risk scores are contextual and auditable; we avoid stigmatizing outputs.
  • PII‑light design; drivers can opt‑out of data sharing.
