🌟 Inspiration

LiveSight was inspired by the needs of hardworking makers, engineers, and creators who spend hours immersed in hands-on projects. Whether they are soldering circuits, machining parts, or building intricate designs, checking on their own health is a hassle that is easy to overlook. Traditional smartwatches and fitness trackers aren’t always practical in these scenarios because they get in the way of busy hands. By integrating a miniature flip-down display into smartglasses, LiveSight provides a seamless, hands-free way to monitor key vital signs and access AI-powered insights without interrupting the creative flow.

⚙️ What It Does

LiveSight is a pair of smartglasses designed for makers and creators who need instant access to critical information while keeping their hands free. It features a discreet, retractable display that appears in the user’s peripheral vision when needed, providing real-time heart rate and body temperature monitoring. After hours of intense work, users can quickly check their health without stopping. Additionally, an API call to ChatGPT allows users to ask technical or general questions and receive responses directly on the display, making LiveSight a powerful tool for productivity, safety, and hands-on problem-solving.

🛠️ How We Built It

  1. Tested each module and sensor, including the heart rate sensor, temperature sensor, microphone, and LCD display, individually to ensure functionality.
  2. Measured the dimensions of the glasses and 3D-modeled the mechanical design of the ball-bearing hinge and the antenna mount that secures the display.
  3. Used the Whisper API to detect speech and transcribe it into text, then sent the transcription to GPT-4 via an API call to generate a response.
  4. Established serial communication between the Arduino Uno and the Raspberry Pi, enabling the response to be transmitted to the Uno for display on the LCD.
  5. Built and integrated the hardware components, including all sensors and the display, onto the physical 3D-printed model and glasses.
  6. Secured the model onto a test subject and conducted comprehensive testing.
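Steps 3 and 4 above can be sketched from the Raspberry Pi side. This is a minimal illustration, assuming the official `openai` Python package and an `OPENAI_API_KEY` environment variable; the file name and system prompt are hypothetical, not taken from the project.

```python
# Sketch of the voice-question pipeline: record -> Whisper transcription -> GPT-4 reply.

def build_messages(question):
    """Wrap a transcribed question in a chat-completion payload, asking for
    short answers since the reply ends up on a small wearable display."""
    return [
        {"role": "system",
         "content": "Answer briefly; the reply is shown on a small wearable display."},
        {"role": "user", "content": question},
    ]

def transcribe(client, wav_path):
    """Send a recorded audio clip to the Whisper API and return its transcription."""
    with open(wav_path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text

def ask_gpt(client, question):
    """Forward the transcribed question to GPT-4 and return the reply text."""
    reply = client.chat.completions.create(
        model="gpt-4", messages=build_messages(question))
    return reply.choices[0].message.content

if __name__ == "__main__":
    from openai import OpenAI  # deferred so the sketch imports without the package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    print(ask_gpt(client, transcribe(client, "question.wav")))
```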

❗ Challenges We Ran Into

One of the biggest challenges we faced was with our first microphone module. The resolution was too low, making the captured audio data unsuitable for conversion into text. This prevented us from using voice commands effectively. To solve this, we switched to a higher-quality 3.5mm jack microphone connected directly to the Raspberry Pi, which significantly improved audio capture and transcription accuracy.

We also initially tried using an ESP32 for processing and network communication, but we ran into multiple issues. The Wi-Fi connection was unstable, and the board ran out of available pins when using Wi-Fi alongside other peripherals. Additionally, it struggled to handle audio recordings efficiently. Due to these limitations, we decided to switch to a Raspberry Pi, which provided better performance and stability for our needs.

Another major hurdle was getting the display to work with the Raspberry Pi. The available libraries for our display were outdated and incompatible, preventing us from rendering any visuals. To overcome this, we restructured our system by having the Raspberry Pi process the data and send it to an Arduino, which then handled the display output to the LCD. This workaround allowed us to integrate the display successfully while maintaining system functionality.
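The Pi-to-Arduino workaround above can be sketched from the Pi side: the Pi frames each reply into display-width lines and writes them over USB serial, and the Uno reads up to each newline and redraws the screen. The port name, baud rate, and line width below are assumptions, and `pyserial` is the assumed serial library.

```python
import textwrap

LINE_WIDTH = 20  # characters per display line -- a hypothetical value

def frame_reply(reply):
    """Split a reply into display-width lines, each terminated by a newline,
    so the Arduino can read one line at a time with readStringUntil('\n')."""
    lines = textwrap.wrap(reply, LINE_WIDTH)
    return "".join(line + "\n" for line in lines).encode("ascii", "replace")

def send_reply(reply, port="/dev/ttyACM0", baud=9600):
    """Write a framed reply to the Arduino over USB serial."""
    import serial  # pyserial; deferred so the sketch imports without it
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(frame_reply(reply))
```

Keeping the framing in a pure function makes it easy to test without hardware attached.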

🏆 Accomplishments That We're Proud Of

  • Prototype: Built a fully functional smart glasses prototype that integrates GPT-powered voice assistance, real-time health monitoring, and a retractable antenna display.
  • Hardware Design: 3D-printed a custom hinge and arm mechanism featuring a ball-bearing system to securely hold the antenna and display, while using magnets for smooth retraction.
  • Serial Communication: Established serial communication between the Arduino Uno (handling display functions) and the Raspberry Pi (processing GPT responses), ensuring continuous data transmission.
  • Real-Time Data Sensors: Successfully integrated live sensor data to accurately display heart rate, temperature readings, and AI-generated responses.
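Raw readings from small optical heart-rate and temperature sensors are noisy; one common technique for making a live readout stable (a general approach, not necessarily the exact one used here) is a short sliding-window average before the value is displayed. A minimal sketch:

```python
from collections import deque

class MovingAverage:
    """Smooth noisy sensor readings (e.g. raw heart-rate samples) with a
    fixed-size sliding window before they are shown on the display."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)  # oldest sample drops off automatically

    def update(self, value):
        """Add a new reading and return the current smoothed value."""
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)
```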

📓 What We Learned

  • Voice Recognition: Gained hands-on experience with voice recognition systems.
  • API Integration: Utilized the OpenAI and Whisper APIs effectively.
  • Hardware & 3D Printing: Developed expertise in hardware design, including 3D printing techniques such as managing print tolerances.
  • Mechanical Design: Explored secure sliding mechanisms for hinge attachments on glasses.
  • Display Technology: Learned to work with SPI TFT displays, focusing on refresh rates, color accuracy, and UI design.
  • Inter-Device Communication: Implemented serial communication between the Arduino Uno and Raspberry Pi.

🕶️ What's Next For LiveSight

  • Voice Activation: Introduce a “Hey GPT” trigger for voice-activated commands.
  • Audio Integration: Add audio output capabilities.
  • Enhanced Functionality: Integrate a camera to expand the glasses’ features.
  • Improved Sensor Placement: Optimize the heart rate sensor’s position (near the temple) for better accuracy.
  • Advanced UI: Develop a text scrolling feature to accommodate longer responses.
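One way the planned text-scrolling feature could work is a marquee: pad the response with blanks and emit fixed-width windows, one per display refresh. The window width and the generator approach below are illustrative assumptions.

```python
def scroll_frames(text, width=20):
    """Yield successive fixed-width windows of `text` for marquee-style
    scrolling; padding lets the text slide fully on and off the display."""
    padded = " " * width + text + " " * width
    for i in range(len(padded) - width + 1):
        yield padded[i:i + width]

# Usage: each yielded frame would be sent to the display at a fixed interval,
# e.g. for frame in scroll_frames(reply): show(frame); sleep(0.3)
```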
