Ampera — AI-Powered Predictive Maintenance for EV Charging Infrastructure
Inspiration
The idea for Ampera came from a frustrating truth about EV charging: failures are often silent until they become public. A charger can degrade for hours (or days) — voltage drifting, temperature creeping, error codes ticking up — and no one notices until a driver shows up, gets an out-of-service screen, and leaves angry.
That moment is bigger than bad UX. Every broken charger is a tiny vote against EV adoption.
So we asked a simple question:
Why are we waiting for outages to tell us something is wrong, when the signals are already there?
Ampera is our answer: turn reactive maintenance into predictive maintenance, using live telemetry + anomaly detection + an AI assistant that explains what’s happening and what to do next.
The Problem
EV charging networks are critical infrastructure, but they fail constantly and silently.
- Operators often learn about downtime from customer complaints, not their own monitoring.
- Reactive workflows mean expensive truck rolls, slow triage, and repeat failures.
- Each hour of downtime costs revenue, erodes trust, and discourages EV adoption.
Traditional monitoring is mostly thresholds and alerts — after something breaks.
Ampera makes it predictive.
The Solution
Ampera is a real-time intelligence platform that:
- Monitors live charger telemetry (voltage, current, temperature, session duration, error codes)
- Detects patterns indicating early failure using a machine learning model
- Produces a continuously updated risk score per charger
- Automatically logs anomalies and incidents for auditability
- Provides an AI triage assistant that explains why and recommends exact actions
The goal is simple:
Catch the failure before the driver ever sees it.
Core Features
1) Live Network Dashboard
A fleet-wide, at-a-glance view of charger health:
- Green = healthy
- Yellow = watch
- Red = act now
Operators don’t dig through logs; they see risk instantly.
2) Predictive Anomaly Detection
Ampera continuously scores each charger for failure risk using unsupervised anomaly detection:
- Input: streaming telemetry (with realistic noise)
- Model: Isolation Forest (scikit-learn)
- Output: a 0–100 risk score, refreshed every few seconds
Instead of “alert when it’s broken,” we aim for:
predictive signal → maintenance action → no downtime
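To make the detection step concrete, here is a minimal sketch of how an Isolation Forest can score telemetry, assuming illustrative metric columns (voltage, current, temperature) and made-up baseline values rather than the project's actual feature set:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Train on "normal" telemetry only: voltage (V), current (A), temperature (°C)
normal = np.column_stack([
    rng.normal(400, 2, 500),
    rng.normal(32, 1, 500),
    rng.normal(35, 3, 500),
])

model = IsolationForest(contamination=0.05, random_state=0).fit(normal)

# score_samples: closer to 0 = normal, more negative = anomalous
healthy = np.array([[400.0, 32.0, 35.0]])
failing = np.array([[385.0, 30.0, 60.0]])  # voltage sag + thermal spike
print(model.score_samples(healthy), model.score_samples(failing))
```

The failing point scores markedly more negative than the healthy one, which is the raw signal the risk-scoring pipeline later maps onto 0–100.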
3) AI Triage Assistant
An embedded chat assistant powered by an LLM (Claude or OpenAI API).
Operators ask:
- “Why is charger 12 flagged?”
- “What should I do about the overheating unit in Lot B?”
- “Is this likely a sensor issue or a real thermal problem?”
Each query is injected with the charger’s recent metrics + anomaly type, so responses are specific, actionable, and not generic.
4) Incident Timeline
Every anomaly, escalation, and resolution is logged with:
- timestamps
- severity
- who acted and what they did
This creates an automatic audit trail without extra paperwork.
5) Charger Detail View
Click any charger to see:
- live metric graphs (voltage / temperature / sessions / errors)
- current risk score
- model “reasoning” signals (e.g., which metrics drifted)
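One simple way to surface "which metrics drifted" signals like these is a per-metric z-score of the recent window against a healthy baseline. This is an illustrative heuristic (not the model's internal attribution), with hypothetical metric names and baseline values:

```python
import numpy as np

METRICS = ["voltage", "temperature", "error_rate"]

def drift_signals(window, baseline_mean, baseline_std, threshold=2.0):
    """Flag metrics whose recent window mean sits more than `threshold`
    standard deviations from the healthy baseline (illustrative heuristic)."""
    z = (window.mean(axis=0) - baseline_mean) / baseline_std
    return {name: float(z[i]) for i, name in enumerate(METRICS)
            if abs(z[i]) > threshold}

# Recent 10-sample window with a temperature climb
window = np.array([[400.0, 52.0, 0.1]] * 10)
flagged = drift_signals(window,
                        baseline_mean=np.array([400.0, 35.0, 0.1]),
                        baseline_std=np.array([2.0, 3.0, 0.05]))
print(flagged)  # only temperature exceeds the threshold
```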
How We Built It
System Architecture (Demo-Ready)
Everything runs locally for hackathon reliability and speed.
- Simulator generates real-time telemetry for 20 chargers
- ML pipeline calculates anomalies + risk scores
- FastAPI backend serves REST endpoints to the dashboard and logs incidents
- React ops dashboard renders fleet health, detail graphs, and timelines
- LLM assistant answers questions using charger-specific context injection
Tech Stack
Frontend
- React + Tailwind CSS
- Recharts for live charts
- Dark-mode ops dashboard aesthetic (fast scanning, low noise)
Backend
- Python + FastAPI
- REST endpoints for:
- telemetry stream
- risk scores
- incident logs
- assistant query endpoint
Data Simulation
- Python script streaming telemetry for 20 chargers
- 3–4 chargers have injected failure patterns:
- gradual voltage drops
- temperature spikes
- increasing error rates
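A minimal sketch of this kind of failure injection: healthy chargers emit noisy but stable readings, while failing ones get a slow, correlated drift rather than a sudden step. The drift rates and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def telemetry_step(t, failing=False):
    """One telemetry sample at time t (seconds). Failing units get a
    gradual voltage sag plus a correlated temperature climb and
    rising error counts (rates are illustrative)."""
    voltage = 400 + rng.normal(0, 1.5)
    temp = 35 + rng.normal(0, 2.0)
    errors = int(rng.poisson(0.05))
    if failing:
        voltage -= 0.02 * t          # slow drift, not a step change
        temp += 0.05 * t             # correlated thermal rise
        errors += int(rng.poisson(0.02 * t / 60))
    return {"t": t, "voltage": voltage, "temperature": temp, "errors": errors}

# After ~10 simulated minutes, the failing unit has visibly sagged and heated up
sample = telemetry_step(600, failing=True)
print(sample)
```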
Machine Learning
- Isolation Forest for unsupervised anomaly detection
- Risk scoring pipeline converts model output into a human-readable health score
AI Assistant
- Claude or OpenAI API
- Prompt injection pattern:
- recent metrics window (e.g., last 10–15 minutes)
- current risk score + direction (rising/falling)
- detected anomaly category (thermal / voltage drift / error burst)
- operational playbook-style response format
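The context-injection pattern above can be sketched as a prompt builder; the exact field names, formatting, and system instructions here are illustrative assumptions about the telemetry schema, not the project's actual prompt:

```python
def build_triage_prompt(charger_id, metrics, risk, trend, anomaly_type, question):
    """Assemble charger-specific context ahead of the operator's question,
    so the LLM answers about this unit rather than generically."""
    context = (
        f"Charger {charger_id} | risk {risk}/100 ({trend})\n"
        f"Anomaly category: {anomaly_type}\n"
        f"Last 15 min: voltage {metrics['voltage']} V, "
        f"temperature {metrics['temperature']} °C, "
        f"errors {metrics['errors']}\n"
    )
    return (
        "You are an EV charging operations triage assistant. "
        "Answer with: likely cause, severity, next actions, escalation criteria.\n\n"
        f"{context}\nOperator question: {question}"
    )

prompt = build_triage_prompt(
    "7", {"voltage": 386.2, "temperature": 61.5, "errors": 9},
    risk=78, trend="rising", anomaly_type="thermal",
    question="What should I do about charger 7?")
print(prompt)
```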
Risk Scoring (Concept)
Isolation Forest produces an anomaly score based on how “different” the current telemetry looks compared to typical behavior.
We then map that into a 0–100 risk score:
$$ \text{risk} = \operatorname{clip}_{0}^{100}\Big(\alpha \cdot \text{anomaly\_score} + \beta \cdot \text{trend\_penalty}\Big) $$
- anomaly_score: how unusual the current point/window is
- trend_penalty: extra weight if key metrics are drifting steadily (the “silent failure” signature)
- clip keeps it bounded and dashboard-friendly
Finally, we bucket for ops clarity:
- 0–39: Healthy (Green)
- 40–69: Watch (Yellow)
- 70–100: Act (Red)
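The mapping and bucketing above can be sketched in a few lines, assuming both inputs have been normalized to [0, 1] and using illustrative weights for α and β:

```python
import numpy as np

def to_risk(anomaly_score, trend_penalty, alpha=80.0, beta=20.0):
    """risk = clip(alpha * anomaly_score + beta * trend_penalty, 0, 100);
    inputs normalized to [0, 1], weights are illustrative."""
    return float(np.clip(alpha * anomaly_score + beta * trend_penalty, 0, 100))

def bucket(risk):
    """Map a 0-100 risk score onto the dashboard's three ops buckets."""
    return "Healthy" if risk < 40 else "Watch" if risk < 70 else "Act"

print(to_risk(0.2, 0.0), bucket(to_risk(0.2, 0.0)))  # 16.0 Healthy
print(to_risk(0.9, 0.8), bucket(to_risk(0.9, 0.8)))  # 88.0 Act
```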
Challenges We Faced
1) Making simulated data feel real
Random noise isn’t enough — real infrastructure fails with patterns. The hardest part was designing failure injection that looked believable:
- slow drift (voltage sag over time)
- correlated changes (temperature + error rate rising together)
- intermittent resets (fake “it went away” moments that fool threshold alerts)
2) Avoiding “anomaly spam”
Unsupervised models can over-flag. We had to tune:
- contamination / sensitivity
- windowing strategy
- smoothing of risk scores so the dashboard doesn’t flicker
We wanted “predictive and calm,” not “noisy and anxious.”
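One common anti-flicker tactic, consistent with the smoothing mentioned above, is an exponential moving average over the risk score so a single noisy reading cannot flip a charger's color (the smoothing factor here is an illustrative assumption):

```python
def smooth(prev, new, alpha=0.3):
    """Exponential moving average: damp single-reading spikes so the
    dashboard color only changes on a sustained trend."""
    return alpha * new + (1 - alpha) * prev

risk = 20.0
for reading in [20, 85, 22, 21]:  # one spurious spike in the stream
    risk = smooth(risk, reading)
print(risk)  # stays well below the 70 "Act" threshold
```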
3) Getting the assistant to be actionable
LLMs love being verbose. Operators need:
- the likely cause
- severity
- next actions (what to check, what to reboot, what to dispatch)
- when to escalate
Prompt structure mattered more than model choice.
4) Real-time UX
A live ops dashboard can become chaotic quickly. We focused on:
- stable layout
- clear color semantics
- fast drill-down
- incident log that reads like a story, not a dump
What We Learned
- Predictive maintenance is a product problem, not just an ML problem. The model is useless if the operator can’t act confidently.
- Explainability beats raw accuracy in demos. Judges (and operators) trust systems that show why.
- Time-series “drift” is where the value is. A single spike is easy; catching “it’s slowly getting worse” is the win.
- LLMs shine as a bridge between signals and actions. Telemetry → plain-English triage → checklist actions is the magic.
Team & Division of Work
Backend Dev 1 — Data & ML
- Built telemetry simulator (20 chargers, noise + injected failures)
- Trained/tuned Isolation Forest
- Built risk scoring pipeline
- Output clean JSON for API consumption
Backend Dev 2 — API & AI
- Built FastAPI endpoints for telemetry, scores, and incidents
- Integrated LLM triage assistant
- Wrote system prompt + context injection format
- Incident creation logic when thresholds are crossed
UI/UX Designer — Frontend & Presentation
- React dashboard (network grid, detail view, charts)
- Chat assistant sidebar
- Incident timeline feed
- Devpost page, screenshots, and demo narrative
Build Timeline (22 Hours)
- Hours 0–2: Setup, simulator emitting data, API skeleton, wireframes
- Hours 2–6: Model trained, endpoints live, dashboard shell consuming API
- Hours 6–12: Live scoring + color coding, detail graphs, assistant working
- Hours 12–18: Incident timeline, refined assistant, UI polish + edge cases
- Hours 18–22: Full demo rehearsal, Devpost + README, speaker flow locked
Demo Narrative (2 Minutes)
“340 million EVs are projected to be on the road by 2030. The infrastructure holding them up is breaking silently every day. Ampera fixes that.”
- Open the dashboard: show a mostly green network.
- Watch charger 7 trend yellow as voltage drops and temperature rises.
- Ampera flags it and creates an incident automatically.
- Ask the assistant: “What’s wrong with charger 7 and what should I do?”
- Show the response (cause + steps) and the metric history proving the trend started hours ago.
- Close:
> “Ampera caught this before any driver ever saw an out-of-service screen.”
Track Strategy
- Best Data Science: emphasize anomaly detection + time-series drift + risk scoring
- Best Design (With Code): the dashboard is the differentiator — instant clarity
- Best Social Good: uptime is EV adoption; this is climate infrastructure
- Best Overall: full-stack + ML + LLM + polished UI in 22 hours
Why Ampera Wins
Most hackathon projects are either:
- technically impressive but ugly, or
- beautifully designed but shallow
Ampera is both:
- real ML signal (predictive, not reactive)
- real product UX (dashboard that feels shippable)
- real “wow” (assistant that turns metrics into action)
You’re not demoing a prototype.
You’re demoing a product.
Built With
- ai
- fastapi
- isolation-forest
- llm-powered
- numpy
- openai-api / claude-api
- pandas
- python
- react
- real-time-telemetry-simulation
- recharts
- rest-api
- scikit-learn
- sql
- supabase (postgresql)
- tailwind-css
- tcn
- typescript
- uvicorn