Neuro-Sync: Autonomous Cognitive Orthotic & Preventive Health System

💡 Inspiration

Axxess challenged us to build an "AI-driven preventive health partner" that tracks data and spots issues before they become problems. Looking at chronic care, we realized that for knowledge workers, students, and especially neurodivergent individuals such as those with ADHD, managing cognitive load is a broken feedback loop.

We watched our friends struggle with "invisible burnout," where stress and mental fatigue accumulate silently until a sudden crash occurs. The core issue is Executive Dysfunction: noticing you are distracted and forcing yourself back on task requires "Executive Function," the very cognitive resource that stress depletes. Existing tools are lagging indicators; they only surface data after the mental crash has happened. We built Neuro-Sync to be an external, autonomous Executive Function: a system that reads your biological state, predicts burnout, and dynamically adapts your digital environment to protect your mental health before a crisis occurs.

⚙️ What it does

Neuro-Sync is a Closed-Loop Brain-Computer Interface (BCI) that maps directly to the key implementation areas of a modern preventive health partner:

  1. Wearable Integration (Reads the Mind): A wearable BCI headset streams real-time EEG (brainwaves) and PPG (heart rate) to our local Edge Gateway (Raspberry Pi 5).
  2. Predictive Risk Modeling: A custom 1D-CNN and LSTM machine learning pipeline processes the biometric telemetry to classify cognitive states (Deep Work, Scattered, Flow) and predicts mental fatigue/focus crashes up to 30 minutes in advance.
  3. Virtual Assistant & Actuation: A LangGraph-powered Agentic AI acts as a virtual health assistant. If it detects a dopamine loop (e.g., doomscrolling Reddit while stress markers are elevated), it autonomously blocks distracting apps to break the cycle.
  4. Smart Alerts & Coaching (Clinical Triage): The system conducts daily wellness chat check-ins (monitoring mental health via speech and behavioral patterns). It generates highly personalized Diet & Lifestyle RAG reports, and proactively alerts a registered caregiver or doctor if severe biometric anomalies are detected—bringing patient and caregiver together.

🪶 Built on Featherless.ai

We firmly believe Neuro-Sync deserves the Featherless.ai prize because our entire "Executive Function" architecture would have been practically impossible to build within a hackathon timeframe without the Featherless unified API. Neuro-Sync doesn't just call a single LLM; it runs a Mixture-of-Agents (MoA) and biometric-triggered model-routing pipeline. Because Featherless gives us access to thousands of premium models under a single OpenAI-compatible endpoint, we were able to assign specific models to specific tasks based on their architectural strengths, without managing multiple cloud subscriptions.

Here is how we aggressively utilized the Featherless ecosystem:

  • The Logic Engine (DeepSeek-V3): We used DeepSeek as the core "brain" of our LangGraph state machine. Because of its elite reasoning capabilities, DeepSeek evaluates the real-time biometric JSON stream every 5 minutes and makes the complex decisions required for "Tool Calling" (e.g., deciding whether to block a specific app or initiate an urgent wellness check based on focus degradation).
  • The Interviewer (Qwen2.5-72B-Instruct): For our user-facing chat UI, we needed an LLM that supports fast, low-latency streaming and empathetic dialogue. We routed our chat endpoint specifically to Qwen to conduct our 5-minute clinical triage check-ins without making the user wait for heavy reasoning overhead.
  • The Context Synthesizer (KimiK2): Human biometric data generates massive payloads. At the end of the day, our Node.js backend fetches thousands of rows of EEG telemetry and active window logs. We routed this exclusively to KimiK2 to leverage its massive context window, allowing it to ingest the entire day's data and output a deeply personalized Markdown/PDF daily cognitive journal.
  • The MoA Triage Pipeline: We built a 3-stage handover protocol natively on Featherless. During a wellness check, Qwen conducts the fast user chat. When the chat ends, the transcript is silently passed to DeepSeek to logically diagnose if the user is experiencing severe burnout. If DeepSeek flags an anomaly, the JSON is passed to KimiK2 to draft a secure, clinical alert to the user's doctor. Three distinct LLMs, three specialized tasks, one Featherless API.
  • Low-Latency Fallback Resiliency: We wrapped our Featherless calls in a dynamic Node.js routing layer. If our primary logic model times out, our code catches the failure and seamlessly swaps the model string to a lighter open-source model, ensuring the user's Executive Function support never goes offline.
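The fallback routing above can be sketched roughly as follows. This is a minimal Python sketch (our actual routing layer is Node.js); the model identifiers, the timeout value, and the `call_model` signature are illustrative assumptions, not the project's exact configuration:

```python
# Minimal sketch of model routing with timeout fallback.
# `call_model` stands in for a Featherless chat-completion call;
# model names and timeout are illustrative placeholders.

PRIMARY_MODEL = "deepseek-ai/DeepSeek-V3"          # core reasoning model
FALLBACK_MODEL = "mistralai/Mistral-7B-Instruct"   # lighter fallback (assumed)

def route_with_fallback(call_model, messages, timeout_s=10.0):
    """Try the primary model; on timeout or connection error,
    retry the same payload with a lighter model.

    call_model(model, messages, timeout_s) -> str
    """
    try:
        return PRIMARY_MODEL, call_model(PRIMARY_MODEL, messages, timeout_s)
    except (TimeoutError, ConnectionError):
        # Same endpoint, same payload -- only the model string changes.
        return FALLBACK_MODEL, call_model(FALLBACK_MODEL, messages, timeout_s)
```

Because every model sits behind one OpenAI-compatible endpoint, the fallback really is a one-string change rather than a second SDK or credential set.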

🛠 How we built it

  • Hardware & Edge: We used a Muse 2 headband broadcasting via Lab Streaming Layer (LSL) to a Raspberry Pi 5. The Pi acts as an edge gateway, pushing data to our centralized compute engine.
  • Data Science (Python/Flask): We applied band-pass (1–50 Hz) and notch filters to clean the raw EEG data, then used the Fast Fourier Transform (FFT) to extract Alpha, Beta, Theta, and Gamma band powers. Our PyTorch 1D-CNN handles real-time classification, while an LSTM models the time series for predictive burnout forecasting.
  • Backend (Node.js/LangGraph): A Node.js server orchestrates the application using WebSockets for zero-latency UI updates. It runs LangGraph.js to manage the Featherless AI agent states and connects to MongoDB for time-series memory storage.
  • Frontend: A sleek, medical-grade Next.js + Tailwind CSS dashboard utilizing Recharts for real-time brainwave visualization and floating biometric bar charts.
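The FFT band-power step in the pipeline above can be sketched as follows. This is a simplified NumPy version under stated assumptions: a single pre-filtered EEG channel and the Muse 2's nominal 256 Hz sampling rate; the real pipeline also applies notch filtering and feeds the result to PyTorch models:

```python
import numpy as np

FS = 256  # Muse 2 nominal EEG sampling rate (Hz)

# Conventional EEG frequency bands (Hz)
BANDS = {
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta":  (13, 30),
    "gamma": (30, 50),
}

def band_powers(eeg: np.ndarray, fs: int = FS) -> dict:
    """Return mean spectral power per band for one EEG channel via FFT."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    return {
        name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }
```

A 10 Hz test tone, for example, should show up almost entirely in the alpha band, which is a quick sanity check before pointing the pipeline at real headset data.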

⚠️ Challenges we ran into

Raw EEG data is incredibly noisy and as unique as a fingerprint. Our initial Kaggle-trained models struggled to accurately classify our specific brain states. To solve this, we performed Subject-Dependent Transfer Learning: we recorded a teammate doomscrolling Reddit for an hour to capture their unique "distracted/stressed" biometric signature, then fine-tuned the model on that data.

Additionally, we had to heavily engineer our LangGraph routing to prevent the live 1 Hz data stream from spamming the Featherless API, ultimately designing a buffer system that evaluates the user's cognitive state in optimized 3-minute chunks.
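The buffering idea is simple: accumulate the 1 Hz samples locally and only hand the agent a window once a full chunk is ready. A minimal sketch, assuming 180 samples per chunk (3 minutes at 1 Hz) and a callback standing in for the LangGraph evaluation:

```python
class CognitiveStateBuffer:
    """Batch a 1 Hz biometric stream into fixed-size chunks for the agent.

    Rather than hitting the LLM API on every sample, samples accumulate
    until a full chunk (180 samples = 3 minutes at 1 Hz) is ready.
    """

    def __init__(self, on_chunk, chunk_size=180):
        self.on_chunk = on_chunk      # callback: evaluates one chunk
        self.chunk_size = chunk_size
        self._samples = []

    def push(self, sample):
        self._samples.append(sample)
        if len(self._samples) >= self.chunk_size:
            chunk, self._samples = self._samples, []
            self.on_chunk(chunk)      # one agent call per full chunk
```

This turns roughly 180 potential API calls into one, at the cost of up to three minutes of decision latency, which is acceptable because burnout builds over tens of minutes, not seconds.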

🏆 Accomplishments that we're proud of

  • Successfully creating a real-time, wireless Lab Streaming Layer (LSL) bridge from a consumer wearable to a heavy PyTorch compute engine.
  • Orchestrating a true 3-stage Mixture-of-Agents (MoA) pipeline using Featherless.ai.
  • Building a preventive health system that actually acts on data (intervening digitally) rather than just displaying it on a dashboard.

🧠 What we learned

We learned that human biology isn't linear—sleep resets the baseline, meaning we had to rethink how we visualized data (shifting from continuous line graphs to floating min/max charts). We also mastered advanced prompt engineering, learning how to inject hardcoded biological baselines into LLM system prompts so the AI could act as a hyper-personalized medical assistant.
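Injecting baselines into the system prompt can be sketched as a simple template. The baseline keys and values below are illustrative placeholders, not our actual calibration data:

```python
def build_system_prompt(baselines: dict) -> str:
    """Render per-user biological baselines into an LLM system prompt so the
    model judges readings relative to *this* user, not population norms."""
    baseline_lines = "\n".join(f"- {k}: {v}" for k, v in baselines.items())
    return (
        "You are a personalized cognitive-health assistant.\n"
        "Interpret all incoming biometrics relative to this user's baselines:\n"
        f"{baseline_lines}\n"
        "Flag sustained deviations from baseline, not single noisy samples."
    )

# Illustrative baseline values, not real calibration data
prompt = build_system_prompt({"resting_hr_bpm": 62, "alpha_beta_ratio": 1.4})
```

Keeping the baselines in the system prompt (rather than in each user turn) means every downstream judgment is automatically anchored to the individual's calibration.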

🚀 What's next for Neuro-Sync

  • True Edge AI: Migrating the predictive LSTM model and quantized LLMs directly onto the Raspberry Pi 5 to ensure offline, privacy-first processing without relying on a centralized PC.
  • Physical Hardware Actuation: Integrating an ESP32 microcontroller to deliver IoT haptic feedback (subconscious rhythmic vibrations) to actively guide user brainwaves into flow states via physical entrainment.
  • Predictive EMR Integration: Directly piping our KimiK2-generated lifestyle coaching reports into hospital systems (Epic/Cerner) to complete the remote patient monitoring loop.
