Track
Education
Project Description
LanGo is a real-time, on-the-go translating headset that aims to make learning new languages engaging and immersive with live audio and visual feedback. The LanGo headset has two modes:
- Learn: This mode translates the objects you point to in real life, helping you study everyday vocabulary in the language of your choosing.
- Game: This mode tests your memory: the headset gives you the translated word for a random object in front of you, and you must point to that object to identify it correctly.
System Architecture
LanGo has four main layers.
The Raspberry Pi interface in pi_screen.py is the device-side control surface. It shows the current language, pending detections, and mode settings, and it writes mode/language changes back to the backend.
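A minimal sketch of how the Pi UI could write a mode/language change back to the backend. The endpoint path `/device/mode`, the payload fields, and `BACKEND_URL` are assumptions for illustration, not taken from the actual pi_screen.py:

```python
import json
import urllib.request

BACKEND_URL = "http://localhost:5000"  # assumed backend address

def build_mode_update(mode: str, language: str) -> dict:
    """Validate and package a mode/language change for the backend."""
    if mode not in ("learn", "game"):
        raise ValueError(f"unknown mode: {mode}")
    return {"mode": mode, "language": language}

def push_mode_update(mode: str, language: str) -> None:
    """POST the update to a hypothetical /device/mode endpoint."""
    payload = json.dumps(build_mode_update(mode, language)).encode()
    req = urllib.request.Request(
        f"{BACKEND_URL}/device/mode",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # real code would handle timeouts/errors
```

Persisting the mode on the backend (rather than only in the UI) is what lets the detector process pick the change up on its next poll.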
The vision/runtime layer in object-detection.py runs the camera, hand tracking, and YOLO object detection. It operates in learn or game mode, polls the persisted mode from the backend, and submits detections when appropriate.
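The "submits detections when appropriate" decision can be sketched as a small pure function. The threshold and the learn/game rules below are illustrative assumptions, not the real logic in object-detection.py:

```python
from typing import Optional

def should_submit(mode: str, label: str, confidence: float,
                  target_label: Optional[str] = None,
                  min_confidence: float = 0.5) -> bool:
    """Decide whether a YOLO detection should be sent to the backend.

    Assumed rules: in learn mode, any confident detection of the
    pointed-at object is submitted; in game mode, only a detection
    matching the prompted target object counts.
    """
    if confidence < min_confidence:
        return False
    if mode == "learn":
        return True
    if mode == "game":
        return label == target_label
    return False
```

Keeping this gate separate from the camera/YOLO loop makes it easy to test the mode behavior without any hardware attached.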
The backend/API layer in server.py is the hub between device, detector, and web UI. It exposes endpoints for pending detections, confirmation/rejection, translation history, device language, and device mode.
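The pending/confirm/reject lifecycle those endpoints expose could sit on top of a model like the following. This is a hedged, in-memory sketch; the class and method names are assumptions, not the actual contents of server.py:

```python
import itertools

class PendingDetections:
    """In-memory model of the pending-detection lifecycle behind the API."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._pending = {}   # detection id -> detection dict
        self._history = []   # confirmed detections, in order

    def submit(self, label: str, translation: str) -> int:
        """Called when the detector posts a new detection; returns its id."""
        det_id = next(self._counter)
        self._pending[det_id] = {"label": label, "translation": translation}
        return det_id

    def pending(self) -> list:
        """What the Pi UI polls to show detections awaiting a decision."""
        return [dict(d, id=i) for i, d in self._pending.items()]

    def confirm(self, det_id: int) -> None:
        """Move a pending detection into confirmed history."""
        self._history.append(self._pending.pop(det_id))

    def reject(self, det_id: int) -> None:
        """Discard a pending detection."""
        self._pending.pop(det_id)

    def history(self) -> list:
        """What the web UI reads to display translation history."""
        return list(self._history)
```

In the real system, `confirm` would also write the entry through to SQLite so it survives restarts.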
The data layer uses SQLite through translation_store.py to store translation history and the current device mode. The web app in the frontend directory reads that backend state and displays the confirmed translation history.
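A minimal sketch of what translation_store.py could look like using Python's built-in `sqlite3` module. The schema, table names, and method names are assumptions for illustration:

```python
import sqlite3

class TranslationStore:
    """SQLite-backed store for translation history and device mode (sketch)."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS translations (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   label TEXT NOT NULL,
                   translation TEXT NOT NULL,
                   language TEXT NOT NULL
               )"""
        )
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS device_state (
                   key TEXT PRIMARY KEY,
                   value TEXT NOT NULL
               )"""
        )

    def add_translation(self, label: str, translation: str, language: str):
        """Persist a confirmed detection into translation history."""
        self.conn.execute(
            "INSERT INTO translations (label, translation, language) "
            "VALUES (?, ?, ?)",
            (label, translation, language),
        )
        self.conn.commit()

    def history(self):
        """Return all confirmed translations, oldest first."""
        cur = self.conn.execute(
            "SELECT label, translation, language FROM translations ORDER BY id"
        )
        return cur.fetchall()

    def set_mode(self, mode: str):
        """Upsert the current device mode so the detector can poll it."""
        self.conn.execute(
            "INSERT INTO device_state (key, value) VALUES ('mode', ?) "
            "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
            (mode,),
        )
        self.conn.commit()

    def get_mode(self, default: str = "learn") -> str:
        row = self.conn.execute(
            "SELECT value FROM device_state WHERE key = 'mode'"
        ).fetchone()
        return row[0] if row else default
```

Storing the mode in the same database as history keeps the detector, Pi UI, and web app in sync through a single source of truth.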
End-to-end flow:
- YOLO detects an object in object-detection.py.
- The detection is submitted to the backend as a pending item.
- The Pi UI in pi_screen.py lets the user save or discard it.
- On save, the backend writes it into SQLite.
- The web app fetches that stored entry and displays it in translation history.