📖 Project Story: CampusNav AR
🚀 Inspiration
Every school has one universal experience: getting lost in the hallways.
For new students, anxious students, neurodivergent learners, and those with mobility needs, navigating a school isn’t just confusing — it’s overwhelming. Traditional tools like printed maps or static signs don’t adapt to accessibility requirements such as avoiding stairs or crowded hallways.
We asked ourselves:
What if indoor navigation in schools worked like Google Maps — but was actually accessibility-first?
That question led us to CampusNav AR, a system that reads real room numbers directly from the environment using OCR and computes accessible routes through a graph-powered map of the school.
💡 What It Does
CampusNav AR enables students to navigate their school using just their phone camera. The app:
- Detects actual room numbers using live OCR (Tesseract.js)
- Determines the user’s location on a digital school graph
- Computes an optimal route using Dijkstra’s algorithm
- Supports accessibility filters:
  - No stairs
  - Wheelchair-accessible routes only
  - Avoid crowded areas (via anonymous segment pings)
- Gives navigation instructions with:
  - Large, high-contrast visual cues
  - Text-to-speech for low-vision and neurodivergent users
The best part?
No markers, no QR codes, no extra hardware — just the room numbers already on the walls.
🧠 How We Built It
1. Map Builder Mode
We began by uploading bird’s-eye floorplans for Floor 1 and Floor 2. Using our custom in-browser UI, we:
- Clicked to add nodes (rooms, junctions, stairs, elevators)
- Connected them with edges (hallways, stair links, elevator connections)
- Assigned distances + “via” types (e.g., hallway, stairs, elevator)
- Exported everything into a JSON graph file
Mathematically, our model looked like:
- Nodes: \( N = \{ \text{rooms}, \text{junctions}, \text{stairs}, \text{elevators} \} \)
- Edges: \( E = \{ (u, v, d, \text{via}, \text{segmentId}) \} \)
- Room lookup: \( \text{roomToNode}[r] = n \)
This graph drives all navigation logic.
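A minimal exported graph file might look like the following — the node IDs, segment IDs, and distances here are illustrative placeholders, not our actual map data:

```json
{
  "nodes": [
    { "id": "room_204", "floor": 2, "type": "room" },
    { "id": "junction_2A", "floor": 2, "type": "junction" },
    { "id": "stairs_east", "floor": 2, "type": "stairs" }
  ],
  "edges": [
    { "from": "room_204", "to": "junction_2A", "distance": 8, "via": "hallway", "segmentId": "seg_21" },
    { "from": "junction_2A", "to": "stairs_east", "distance": 12, "via": "stairs", "segmentId": "seg_22" }
  ],
  "roomToNode": { "204": "room_204" }
}
```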
2. Camera Navigation Mode
We used JavaScript + Tesseract.js to recognize room numbers directly from the camera feed.
OCR pipeline:
```javascript
// Recognize text in the current camera frame, then keep only
// 3–4 digit tokens that look like room numbers
const { data: { text } } = await Tesseract.recognize(canvas, "eng");
const matches = text.match(/\b\d{3,4}\b/g);
```
Once a valid room number is detected:
- We map it to a graph node
- Run Dijkstra with accessibility constraints
- Generate a step-by-step route
- Display a clear arrow + instruction
- Speak the instruction aloud using the Web Speech API
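The last two steps — turning a computed path into readable, speakable directions — can be sketched like this. The `pathToInstructions` helper and its phrasing are illustrative, not the app's exact wording; the edge `via` types come from the map builder:

```javascript
// Convert a node path into one instruction per edge, using the
// edge's "via" type to pick the phrasing.
function pathToInstructions(path, edgeLookup) {
  const steps = [];
  for (let i = 0; i < path.length - 1; i++) {
    const e = edgeLookup(path[i], path[i + 1]);
    if (e.via === "elevator") steps.push(`Take the elevator to ${path[i + 1]}`);
    else if (e.via === "stairs") steps.push(`Take the stairs to ${path[i + 1]}`);
    else steps.push(`Walk ${e.distance} m to ${path[i + 1]}`);
  }
  steps.push(`You have arrived at ${path[path.length - 1]}`);
  return steps;
}
```

In the browser, each step can then be voiced with the Web Speech API, e.g. `speechSynthesis.speak(new SpeechSynthesisUtterance(steps[0]))` after feature-checking `"speechSynthesis" in window`.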
3. Accessibility-Aware Pathfinding
We implemented a weighted Dijkstra:
\[ \text{cost}(e) = \text{distance}(e) + \alpha \cdot \text{crowdDensity}(e) \]
Constraints:
- If `noStairs = true`, stair edges get cost \( \infty \)
- If `wheelchairOnly = true`, stair edges are excluded entirely
- If `avoidCrowds = true`, hallways with higher density get higher penalties
This allows highly personalized and accessibility-aware routing.
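A compact sketch of this constrained Dijkstra, under assumed data shapes (adjacency lists of `{ to, distance, via, crowdDensity }` edges; `alpha` is the crowd-penalty weight from the cost formula above) — a simplified illustration, not our exact implementation:

```javascript
// Accessibility-aware Dijkstra: stair edges are skipped when stairs
// must be avoided, and crowded segments are penalized by alpha.
function findRoute(graph, start, goal, opts = {}) {
  const { noStairs = false, avoidCrowds = false, alpha = 5 } = opts;
  const dist = { [start]: 0 };
  const prev = {};
  const visited = new Set();
  // Array-based priority queue; fine for a building-sized graph.
  const queue = [[0, start]];
  while (queue.length) {
    queue.sort((a, b) => a[0] - b[0]);
    const [d, node] = queue.shift();
    if (visited.has(node)) continue;
    visited.add(node);
    if (node === goal) break;
    for (const e of graph[node] || []) {
      if (noStairs && e.via === "stairs") continue; // hard constraint
      // cost(e) = distance(e) + alpha * crowdDensity(e)
      const penalty = avoidCrowds ? alpha * (e.crowdDensity || 0) : 0;
      const nd = d + e.distance + penalty;
      if (nd < (dist[e.to] ?? Infinity)) {
        dist[e.to] = nd;
        prev[e.to] = node;
        queue.push([nd, e.to]);
      }
    }
  }
  if (!(goal in dist)) return null; // no accessible route exists
  const path = [goal];
  while (path[0] !== start) path.unshift(prev[path[0]]);
  return { path, cost: dist[goal] };
}
```

With `noStairs` set, a short stair route is rejected in favor of a longer step-free one, which is exactly the "shortest path is not always the best path" trade-off.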
🧩 Challenges We Faced
Real-time OCR speed
Tesseract.js is powerful but heavy. We had to:
- Downscale camera frames
- Run OCR every ~2 seconds
- Clean text aggressively to reduce false positives
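The first two optimizations can be sketched as two small helpers — a time gate so OCR runs at most once per interval, and a downscale calculation that caps frame width. The 2000 ms interval and 480 px cap are illustrative values, not tuned constants from our app:

```javascript
// Returns a function that allows at most one OCR pass per intervalMs.
function makeOcrGate(intervalMs = 2000) {
  let last = -Infinity;
  return (now) => {
    if (now - last < intervalMs) return false;
    last = now;
    return true;
  };
}

// Compute target dimensions that cap width while preserving aspect ratio.
function downscaled(width, height, maxWidth = 480) {
  if (width <= maxWidth) return { width, height };
  const scale = maxWidth / width;
  return { width: maxWidth, height: Math.round(height * scale) };
}
```

In the capture loop, the gate is checked with `performance.now()` before drawing the downscaled frame to a canvas and handing it to Tesseract.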
Graph modeling of a building
Turning a messy floorplan into a functional graph with:
- Junctions
- Multi-floor transitions
- Reliable distances
was much harder than expected — which motivated building our Map Builder tool.
Balancing accessibility & efficiency
The shortest path is not always the best path.
Designing the algorithm to respect mobility constraints and sensory needs required careful weighting and edge filtering.
Simple, non-overwhelming UI
We optimized for clarity:
- High-contrast visuals
- Large arrows
- Low cognitive load
- Optional voice guidance
📚 What We Learned
- How to run OCR in the browser with real-time video
- How to convert building layouts into graph structures
- How to modify Dijkstra for constraints + penalties
- How accessibility dramatically shapes UX decisions
- How to build a navigation tool that works anywhere with zero setup
🔧 Built With
- JavaScript (Vanilla)
- Tesseract.js (OCR engine)
- HTML5 Canvas (frame processing)
- Web Speech API (TTS navigation)
- getUserMedia API (camera access)
- Custom Dijkstra pathfinding implementation
- JSON-based map graph builder
- CSS for clean UI/UX
- (Optional future) Firebase for real-time crowd density