Inspiration
FixItXR was inspired by the idea that car repairs should feel intuitive rather than confusing. We wanted to replace paper manuals and guesswork with clear, mixed-reality guidance. Using the Meta Quest 3’s cameras and spatial tracking, we envisioned an assistant that overlays labels and instructions directly onto the real engine bay, helping anyone perform repairs with confidence and clarity.
What it does
FixItXR automatically identifies the components inside the engine bay and overlays a clear label on each part. It then displays step-by-step tutorials showing the user exactly how to perform maintenance tasks or repairs. Using the Meta Quest 3’s cameras and spatial tracking, the system aligns instructions with the real components, making each repair easy to follow.
How we built it
We built FixItXR using a combination of mixed reality tools, AI models, and spatial computing. Unity, together with the Meta Quest 3 passthrough and MR SDK, handled real-time spatial tracking, scene understanding, and anchoring virtual labels onto real engine components. Using 169 photos, we trained a Roboflow computer vision model capable of detecting key parts inside the engine bay. OpenAI powered the adaptive step-by-step guidance, reasoning, and TTS instructions that respond to the user’s actions. By combining these systems, we created a seamless workflow that scans the environment, identifies components, and delivers clear, interactive repair tutorials.
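The detection-to-labeling handoff can be sketched as follows. Roboflow's hosted inference returns 2D bounding boxes as JSON (with `x`, `y`, `width`, `height`, `class`, and `confidence` fields); filtering those and keeping each box's center gives the MR layer a point to hang a label on. The threshold, function name, and sample data below are illustrative, not our actual code:

```python
# Sketch: turn Roboflow-style detection JSON into label anchors.
# Field names follow Roboflow's hosted inference response format;
# the threshold and helper name are assumptions for illustration.

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for showing a label

def detections_to_labels(response: dict) -> list[dict]:
    """Drop low-confidence predictions and keep each box's center
    point, which the MR layer can use to place a floating label."""
    labels = []
    for pred in response.get("predictions", []):
        if pred["confidence"] < CONFIDENCE_THRESHOLD:
            continue
        labels.append({
            "part": pred["class"],
            "center_px": (pred["x"], pred["y"]),      # box center, pixels
            "size_px": (pred["width"], pred["height"]),
        })
    return labels

# Example response with one confident and one uncertain detection:
sample = {"predictions": [
    {"x": 320, "y": 240, "width": 80, "height": 60,
     "class": "oil_cap", "confidence": 0.91},
    {"x": 500, "y": 300, "width": 40, "height": 40,
     "class": "dipstick", "confidence": 0.35},
]}
print(detections_to_labels(sample))  # only the oil_cap survives the filter
```

In the real app this filtering happens per frame, and the surviving centers are handed to Unity for spatial anchoring.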
Challenges we ran into
One of the biggest challenges was getting object detection to work reliably in 3D space. Existing pre-trained models worked in 2D images but failed when used in mixed reality, so we trained our own Roboflow model from 169 photos to recognize engine components more accurately. Another challenge was integrating multiple systems—computer vision, Unity MR interactions, UI flow, and AI-driven tutorials—into one smooth pipeline. We also had to carefully coordinate timing between detection events, spatial anchoring, and step-by-step instructions to ensure the experience felt stable and responsive inside the headset.
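The core of the 2D-versus-3D gap can be illustrated with a small sketch: a detection's pixel center only defines a ray out of the camera, so a depth estimate (e.g., from the headset's scene understanding) is needed before a label can be pinned in space. The pinhole-camera intrinsics below are placeholder values, not the Quest 3's actual calibration:

```python
import math

# Why 2D detectors alone fail in MR: a box center (u, v) gives only a
# ray through the camera, not a 3D position. Intrinsics are assumed
# placeholder values for illustration.

FX, FY = 600.0, 600.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed)

def pixel_to_ray(u: float, v: float) -> tuple[float, float, float]:
    """Unproject a pixel to a unit-length ray in camera space
    using the standard pinhole model."""
    x = (u - CX) / FX
    y = (v - CY) / FY
    z = 1.0
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)

def anchor_point(u: float, v: float, depth_m: float):
    """Scale the ray by a depth estimate (meters along the camera's
    z axis) to get a 3D point where a label can be anchored."""
    rx, ry, rz = pixel_to_ray(u, v)
    scale = depth_m / rz
    return (rx * scale, ry * scale, rz * scale)

print(anchor_point(320.0, 240.0, 0.5))  # center pixel at 0.5 m -> (0.0, 0.0, 0.5)
```

Without that depth term, labels drift as the user moves, which is exactly the instability a 2D-only model produces in a headset.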
Accomplishments that we're proud of
We're proud that our custom-trained model consistently labels engine components correctly with no noticeable misidentification, even in a real mixed-reality environment. We successfully built clear, structured tutorials that guide users through each step, supported by smooth text-to-speech instructions that make the experience accessible and hands-free. Bringing all these elements together—accurate detection, stable MR anchoring, and an intuitive AI-driven tutorial system—felt like a major milestone for our team.
What we learned
We learned a lot about training and refining computer vision models, especially how much data and iteration it takes to get reliable results. We also gained experience debugging complex MR interactions and solving technical challenges quickly as a team. Most importantly, we built strong friendships and learned how to collaborate effectively under pressure, bringing together different skills to create something meaningful in a very short time.
What's next for FixItXR
We plan to expand FixItXR to cover more car systems beyond the engine bay—including brakes, suspension, electrical components, and interior repairs. We also plan to add real-time error tracking to detect and correct user mistakes before they cause damage, preventing issues like loosening the wrong bolt or skipping safety steps. Additionally, we want to add multilingual support to make automotive repair accessible globally, delivering instructions in users' native languages. Our vision is to build a comprehensive platform that empowers anyone to confidently maintain and repair their vehicle, regardless of experience level.