About the project
Inspiration
Alzheimer’s disease and cognitive decline affect not only memory recall but also how people interpret their physical environment. Objects that once carried personal meaning can lose context, increasing confusion and anxiety during everyday interactions.
Research in reminiscence therapy shows that memory retrieval is strongly tied to physical context and familiar objects. Instead of relying on abstract recall, we explored whether computer vision and contextual interfaces could allow the environment itself to reinforce memory through gentle, situational cues.
Memory Anchors was designed to transform everyday surroundings into cognitive support interfaces by associating personal memories with recognizable objects in real time.
What we built
Memory Anchors is a spatial memory interface that overlays personal memories onto real-world objects detected through a live camera feed. The system uses computer vision to recognize familiar household objects and attaches contextual memory cards to them in real time.
When a user points their device at an object such as a chair, book, or television, the application detects the object and renders a memory card anchored to the detection region. These memory anchors can be created in advance by caregivers or family members and may include text, narration, or additional media context.
The system combines real-time object detection, contextual memory retrieval, and voice narration to create a lightweight augmented interaction model. Instead of requiring users to actively recall information, Memory Anchors reinforces familiarity through situational cues tied to the physical environment.
From a systems perspective, Memory Anchors integrates perception (object detection), retrieval (memory lookup), and assistive interaction (narration) into a single loop. The environment becomes a context-aware interface that supports memory through recognition rather than recall.
This prototype demonstrates how spatial computing and assistive AI can transform everyday objects into cognitive anchors that help preserve identity and reduce confusion in early-stage memory decline.
How it works
The frontend is implemented in React and runs entirely in the browser. TensorFlow.js with the COCO-SSD object detection model performs real-time inference on the camera stream. Detection results are rendered on a canvas overlay synchronized with the video feed.
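Keeping the canvas overlay synchronized with the video means mapping detection coordinates (reported in the video's intrinsic resolution) onto the canvas's display size. A minimal sketch of that mapping, where `scaleBbox` is a hypothetical helper name rather than the project's actual code:

```javascript
// Map a COCO-SSD bbox ([x, y, width, height] in video pixels) onto a
// canvas that may be rendered at a different display size.
// Hypothetical helper; the real project may organize this differently.
function scaleBbox(bbox, videoW, videoH, canvasW, canvasH) {
  const sx = canvasW / videoW;
  const sy = canvasH / videoH;
  const [x, y, w, h] = bbox;
  return [x * sx, y * sy, w * sx, h * sy];
}

// Example: a 640x480 camera stream drawn on a 1280x960 canvas.
const scaled = scaleBbox([100, 50, 200, 120], 640, 480, 1280, 960);
// scaled is [200, 100, 400, 240]
```

COCO-SSD's `detect()` returns predictions with a `bbox` in this `[x, y, width, height]` form, so a scaling step like this is all the overlay needs per frame.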
When a supported object label is detected, the application queries a memory service and displays a memory card positioned relative to the detected bounding box. Detection stabilization logic reduces flicker by requiring consistent detections across frames.
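The stabilization idea can be sketched as a per-label frame counter: a memory card appears only after a label has been detected in several consecutive frames, and disappears only after it has been absent for several frames. The class below is an illustrative model of that logic, with names and thresholds chosen for the sketch rather than taken from the project:

```javascript
// Simplified detection stabilizer: a label becomes "stable" after being
// detected in `showAfter` consecutive frames, and is dropped after being
// absent for `hideAfter` frames. Illustrative sketch, not project code.
class DetectionStabilizer {
  constructor(showAfter = 3, hideAfter = 5) {
    this.showAfter = showAfter;
    this.hideAfter = hideAfter;
    this.seen = new Map();   // label -> consecutive hit count
    this.missed = new Map(); // label -> consecutive miss count
    this.stable = new Set();
  }

  // Call once per frame with the labels the detector reported.
  // Returns the labels that are currently stable enough to show.
  update(labels) {
    const present = new Set(labels);
    for (const label of present) {
      this.seen.set(label, (this.seen.get(label) || 0) + 1);
      this.missed.set(label, 0);
      if (this.seen.get(label) >= this.showAfter) this.stable.add(label);
    }
    for (const label of [...this.seen.keys()]) {
      if (!present.has(label)) {
        this.missed.set(label, (this.missed.get(label) || 0) + 1);
        this.seen.set(label, 0);
        if (this.missed.get(label) >= this.hideAfter) {
          this.stable.delete(label);
          this.seen.delete(label);
          this.missed.delete(label);
        }
      }
    }
    return [...this.stable];
  }
}
```

The asymmetry (quick to show, slower to hide) is what suppresses flicker: a single dropped frame does not dismiss an already-anchored card.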
A FastAPI backend provides a memory anchor service indexed by object label. The service exposes endpoints for retrieving, creating, and updating memory anchors, enabling caregivers to pre-associate memories with specific household objects.
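The service's data model can be thought of as a lookup table keyed by object label. Below is a hedged in-memory stand-in for that model; the route names in the comments and the field names (`text`, `narration`) are assumptions for illustration, not the project's actual FastAPI schema:

```javascript
// In-memory stand-in for the memory anchor service, indexed by object
// label. Field names and the endpoint mapping in comments are assumed.
class AnchorStore {
  constructor() {
    this.anchors = new Map();
  }

  // Roughly: GET /anchors/{label}
  get(label) {
    return this.anchors.get(label) || null;
  }

  // Roughly: POST /anchors
  create(label, memory) {
    this.anchors.set(label, { label, ...memory });
    return this.anchors.get(label);
  }

  // Roughly: PATCH /anchors/{label}
  update(label, patch) {
    const existing = this.anchors.get(label);
    if (!existing) return null;
    this.anchors.set(label, { ...existing, ...patch });
    return this.anchors.get(label);
  }
}

// Caregiver pre-associates a memory with a household object:
const store = new AnchorStore();
store.create("chair", { text: "Grandpa built this chair in 1982." });
```

Keying anchors by detector label keeps retrieval trivial: the frontend can query with exactly the string COCO-SSD emits.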
A voice narration module built using the Web Speech API provides accessible playback of memory text directly within the interface.
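In the browser this comes down to `speechSynthesis.speak()` with a `SpeechSynthesisUtterance`. The sketch below separates sentence-chunking from playback by injecting the speak callback; the chunking approach is an illustration of one way to structure such a module, not necessarily the project's implementation:

```javascript
// Split memory text into sentence-sized chunks and hand each to a speak
// callback. In the browser, `speak` would wrap speechSynthesis.speak();
// injecting it keeps the logic testable. Illustrative sketch only.
function narrate(text, speak) {
  const chunks = text
    .split(/(?<=[.!?])\s+/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
  for (const chunk of chunks) speak(chunk);
  return chunks.length;
}

// Hypothetical browser usage:
// narrate(card.text, (t) =>
//   speechSynthesis.speak(new SpeechSynthesisUtterance(t)));
```

Speaking sentence by sentence keeps each utterance short, which tends to make pausing or cancelling narration feel more responsive than queueing one long utterance.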
Taken together, the application combines:
- in-browser computer vision inference
- contextual UI rendering on a live camera stream
- a memory retrieval service
- assistive voice interaction
This creates a lightweight augmented interface that connects physical objects to personal memory context.
Challenges
Running object detection in real time inside the browser required careful handling of inference frequency and rendering synchronization to avoid unstable overlays and UI flickering. We implemented detection smoothing and fallback interaction modes to maintain reliability during demonstrations.
Another challenge was designing an interaction model appropriate for cognitive support. Interfaces had to remain predictable, low-distraction, and accessible while still demonstrating the technical capabilities of real-time computer vision and contextual overlays.
What we learned
We learned that assistive AI systems benefit more from reliability and contextual clarity than from model complexity. Integrating computer vision with memory retrieval demonstrated how physical environments can become meaningful interaction surfaces when AI is applied thoughtfully.
The project also highlighted how lightweight camera-based AR interfaces can be implemented entirely in the browser without specialized hardware or native frameworks.
Built With
- FastAPI
- React
- Supabase
- TensorFlow