Inspiration
CueLens came from something that happened on our way to the hackathon. As we were approaching the U.S. border, we realized one of our teammates had forgotten their passport and we had to turn around. It sounds small, but in the moment it was stressful and frustrating, and it got us talking about how easy it is to feel lost when your memory fails you. That conversation got us thinking about how situations like this could be avoided, not just once in a while, but for people who deal with dementia every day. We wanted to build something that steps in at the right moment and helps without making the person feel embarrassed or overwhelmed.
What it does
CueLens acts like a quiet memory assist that lives in your field of view. It recognizes familiar places and everyday objects and shows short, calming cues when they matter. Things like reminding you that you are at home, that you are at the pharmacy, that the stove is on, or that you already took your medication today. Family members or caregivers can customize these cues through a mobile app so everything is personal and familiar. The idea is not to flood someone with information, but to give just enough reassurance to help them feel confident and grounded.
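The "just enough reassurance" idea above can be sketched in TypeScript. This is a minimal illustration, not the project's actual code: the `Cue` type, the context names, and the cooldown mechanism are all hypothetical stand-ins for how a cue might be selected without flooding the wearer.

```typescript
// Hypothetical sketch of context-aware cue selection (not the real CueLens code).
// A cue fires only when its context is recognized and it has not been shown
// recently, so the wearer gets gentle reassurance instead of a notification flood.

type Context = "home" | "pharmacy" | "stove_on" | "medication_taken";

interface Cue {
  context: Context;
  message: string;    // short, calming text written by a caregiver
  cooldownMs: number; // minimum time between repeats of this cue
}

class CueEngine {
  private lastShown = new Map<Context, number>();

  constructor(private cues: Cue[]) {}

  // Returns the cue to display for a recognized context, or null if it
  // was shown too recently (avoids overwhelming the wearer with repeats).
  cueFor(context: Context, now: number = Date.now()): Cue | null {
    const cue = this.cues.find((c) => c.context === context);
    if (!cue) return null;
    const last = this.lastShown.get(context);
    if (last !== undefined && now - last < cue.cooldownMs) return null;
    this.lastShown.set(context, now);
    return cue;
  }
}
```

In a setup like this, the caregiver app would populate the `cues` array, and the vision pipeline would call `cueFor` whenever it recognizes a place or object.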
How we built it
We built CueLens with a focus on improving the experience of people with dementia and their caretakers.
Frontend (Web Application)
- Framework: Next.js 14.1.0 (App Router)
- UI Library: React 18.2.0, React DOM 18.2.0
- Language: TypeScript 5.3.3 (strict mode)
- Styling: Tailwind CSS 4.0 (via @tailwindcss/postcss)
- AI/Vision Libraries: face-api.js 0.22.2 (face recognition), @tensorflow/tfjs 4.22.0 (ML/face processing), @overshoot/sdk 0.1.0-alpha.2 (vision analysis)

Backend (API Server)
- Runtime: Node.js 18+ (via engines)
- Framework: Express 4.18.2
- Language: TypeScript 5.3.3 (strict mode, ESM)
- Runtime Execution: tsx 4.7.0 (dev), compiled TypeScript (production)
- WebSockets: ws 8.18.0 (OpenAI Real-Time API proxy)
- Validation: Zod 3.22.4
- File Upload: Multer 1.4.5-lts.1
- HTTP Client: form-data 4.0.0
- AI Integration: OpenAI SDK 4.28.0 (Real-Time API, transcription, vision)
- Middleware: CORS 2.8.5, dotenv 16.4.5

Build Tools & Package Management
- Package Manager: pnpm 8.15.0 (workspaces)
- Monorepo: Turborepo 1.12.0 (task orchestration, caching)
- Code Quality: ESLint 8.56.0, Prettier 3.2.4, TypeScript compiler

Shared Packages (Monorepo)
- @cuelens/shared - shared TypeScript types & Zod schemas
- @cuelens/eslint-config - shared ESLint configuration
- @cuelens/tsconfig - shared TypeScript configs (base, Next.js, Node.js)

External Services & Integrations
- Overshoot SDK - vision analysis for camera feed
- OpenAI API - Real-Time API (WebSocket), transcription (STT), vision analysis
- Browser APIs: MediaStream (webcam), WebSocket (real-time)

Development & CI/CD
- CI/CD: GitHub Actions (lint, typecheck, build)
- Version Control: Git (GitHub)

Data Storage
- Currently in-memory stores (people, suggestions)
- Database: TBD (PostgreSQL, MongoDB, or graph database planned)

Communication Protocols
- REST API: Express routes (HTTP/JSON)
- WebSockets: real-time communication (OpenAI Real-Time API proxy)
- HTTP/HTTPS: standard API communication

Architecture Pattern
- Monorepo: pnpm workspaces + Turborepo
- Code Organization: apps (web, api) + shared packages
- Type Safety: shared contracts between frontend and backend via @cuelens/shared
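The "shared contracts" pattern mentioned above (types shared between the web and api apps via @cuelens/shared) can be illustrated like this. The real package defines its schemas with Zod; this dependency-free sketch uses a hand-written type guard instead, and the `Person` shape is hypothetical.

```typescript
// Hypothetical sketch of a shared contract between frontend and backend.
// In the real monorepo this would live in @cuelens/shared and be defined
// with Zod; a plain type guard stands in here so the example has no deps.

interface Person {
  id: string;
  name: string;
  relationship: string; // e.g. "daughter", "neighbor"
  cueMessage?: string;  // optional caregiver-written cue shown on recognition
}

// Runtime validation at the API boundary: the server checks request bodies,
// the client checks responses, and both import this single definition.
function isPerson(value: unknown): value is Person {
  if (typeof value !== "object" || value === null) return false;
  const p = value as Record<string, unknown>;
  return (
    typeof p.id === "string" &&
    typeof p.name === "string" &&
    typeof p.relationship === "string" &&
    (p.cueMessage === undefined || typeof p.cueMessage === "string")
  );
}
```

Keeping one definition in a shared workspace package means a change to the contract immediately surfaces as a type error in both apps instead of a runtime mismatch.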
Challenges we ran into
We faced challenges working with a new tech stack and setting up CI/CD pipelines on GitHub, which took time and practice to get right as a newer team. The largest challenge was getting the speech-to-text pipeline to understand context well enough to create a new person profile. Turning natural, unstructured speech into something the system could reliably act on required more iteration than we had planned for during the hackathon.
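The speech-to-profile problem above can be shown with a toy extractor. This is a simplified stand-in, not our actual pipeline (which sends transcripts through the OpenAI API); the phrase patterns and field names are hypothetical, and the example mainly demonstrates why naive parsing was not enough.

```typescript
// Toy illustration of the hard part: turning free-form speech like
// "this is my daughter Sarah" into a structured profile the system can store.
// A rule-based parser like this only handles phrasings it anticipates,
// which is exactly why the real pipeline leans on a language model instead.

interface ProfileDraft {
  name: string;
  relationship: string;
}

function extractProfile(transcript: string): ProfileDraft | null {
  // Matches phrasings like "this is my daughter Sarah" or
  // "meet my neighbor, Tom" -- anything else falls through to null.
  const match = transcript.match(/(?:this is|meet) my (\w+),? (\w+)/i);
  if (!match) return null;
  return { relationship: match[1].toLowerCase(), name: match[2] };
}
```

A sentence like "she helps me on Tuesdays" carries useful context but matches no pattern here, which is the kind of gap that pushed us toward model-based extraction.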
Accomplishments that we're proud of
What mattered most to us was getting feedback from a caretaker of a family member with dementia and hearing that this is something they would actually want. That made the idea feel real. We are also proud of how far we got in such a short time and that this is not just a hackathon project for us. With the right hardware and continued work, this could genuinely help someone day to day.
What we learned
We learned that when you try to solve a problem you personally relate to, you usually end up helping a lot more people than you expect. Small frustrations often point to much bigger issues, especially in healthcare and accessibility. Building with empathy first leads to more meaningful ideas and more scalable, practical solutions.
What's next for CueLens
We want to connect CueLens to AR hardware. With the app and core logic in place, devices like Meta's smart glasses could put cues directly in the field of view of people living with dementia. We also want to continue improving facial recognition, add stronger caregiver controls and presets, and begin beta testing with real users to make sure the experience stays simple, calm, and genuinely helpful.
Built With
- express.js
- face-api
- javascript
- nextjs
- openai-api
- pnpm
- react
- tailwind
- tensorflow
- typescript