Inspiration
Public speaking anxiety is one of the most common fears people experience, yet public speaking itself is rarely practiced in a safe and controlled environment. I was inspired by the idea that virtual reality could simulate not just the physical setting of standing on a stage, but the internal psychological experience of anxiety itself. Rather than simply placing the user in front of an audience, I wanted the environment to respond dynamically to rising stress, visually, physically, and auditorily, to create a more immersive and emotionally realistic experience.
What it does
SpeechVR places the user on a virtual stage in front of an audience and simulates escalating anxiety through multiple stages. As anxiety increases, the light intensity changes, peripheral vision narrows using a vignette overlay, ringing audio begins to play, and camera shake is introduced to simulate physical instability. The experience includes randomized audience questions that can increase anxiety unpredictably. To help regulate stress, the user can perform a guided breathing exercise using the controller trigger. Completing a full breath reduces anxiety by one level, modeling a simple but effective emotional regulation loop. The system transforms anxiety from a passive concept into an interactive mechanic.
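The regulation loop described above can be sketched in a few lines. This is a minimal, engine-agnostic Python sketch of the mechanic, not the actual Unity code; the class name `AnxietyModel`, the 50% question-escalation chance, and the level bounds are illustrative assumptions.

```python
import random

MIN_LEVEL, MAX_LEVEL = 0, 3  # discrete anxiety levels, as in the project


class AnxietyModel:
    """Toy model of the anxiety loop: questions may raise the level,
    a completed breath lowers it by one (values are assumptions)."""

    def __init__(self):
        self.level = MIN_LEVEL

    def audience_question(self):
        # Randomized audience questions can bump anxiety unpredictably.
        if random.random() < 0.5:  # illustrative probability
            self.level = min(MAX_LEVEL, self.level + 1)

    def complete_breath(self):
        # Finishing a full guided breath reduces anxiety by one level.
        self.level = max(MIN_LEVEL, self.level - 1)


model = AnxietyModel()
model.level = 2
model.complete_breath()
print(model.level)  # 1
```

The key design point is that anxiety is a bounded, discrete state the user can actively push back down, which is what turns it from a passive backdrop into a game mechanic.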
How we built it
I built SpeechVR in Unity using XR Plug-in Management for Oculus integration and proper headset tracking. Controller input was implemented using Unity’s XR input system to detect button presses and trigger holds. Anxiety states were modeled as discrete levels (0–3), each mapped to lighting intensity, overlay opacity, audio volume, and camera shake values. Smooth transitions were implemented using interpolation functions to avoid abrupt environmental changes. Camera shake was generated using Perlin noise to create natural, non-repetitive motion. The breathing mechanic was implemented using a radial UI fill element that visually closes as the user holds the trigger, reinforcing the inhale-exhale simulation. The intro sequence was scripted to automatically walk the user onto the stage before the experience begins.
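The level-to-effect mapping and the smoothing between levels can be illustrated with a short engine-agnostic sketch. The table values, the names `LEVEL_PARAMS` and `step_toward`, and the easing speed are hypothetical stand-ins for the project's tuned Unity parameters; the point is only the technique of easing each effect value toward its per-level target every frame instead of snapping.

```python
# level: (light_intensity, vignette_opacity, ringing_volume, shake_amplitude)
# All numbers below are illustrative, not the shipped values.
LEVEL_PARAMS = {
    0: (1.0, 0.0, 0.0, 0.00),
    1: (0.8, 0.2, 0.2, 0.01),
    2: (0.6, 0.5, 0.5, 0.03),
    3: (0.4, 0.8, 0.9, 0.06),
}


def lerp(a, b, t):
    """Linear interpolation between a and b by fraction t."""
    return a + (b - a) * t


def step_toward(current, level, dt, speed=2.0):
    """Ease every active effect value toward the target for `level`,
    called once per frame with the frame's delta time."""
    t = min(1.0, speed * dt)
    target = LEVEL_PARAMS[level]
    return tuple(lerp(c, g, t) for c, g in zip(current, target))


state = LEVEL_PARAMS[0]
for _ in range(120):          # ~2 seconds at 60 fps
    state = step_toward(state, 3, dt=1 / 60)
print(tuple(round(v, 2) for v in state))
```

In Unity the same idea maps onto calling `Mathf.Lerp` per frame in `Update()` for the light intensity, overlay alpha, and audio volume, which is what keeps transitions between anxiety levels from feeling abrupt.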
Challenges we ran into
One of the biggest challenges was configuring VR correctly across multiple rebuilds and platform switches. I encountered package conflicts, XR initialization issues, Android build errors, and input system mismatches that required rebuilding the project from a clean environment. Head tracking issues were particularly difficult to debug, as they were related to XR Origin configuration and loader initialization rather than script logic. Another challenge was balancing immersion without overwhelming the user: tuning lighting, overlay intensity, ringing volume, and shake amplitude required careful iteration to maintain realism while keeping the experience usable.
Accomplishments that we're proud of
I'm proud of successfully creating a system where anxiety is not just implied, but mechanically simulated. The combination of lighting changes, tunnel vision effects, procedural shake, spatial audio, and interactive breathing creates a layered experience that feels immersive and responsive. I'm also proud of overcoming significant technical hurdles to get the project running smoothly on standalone VR hardware. The final build includes automated stage entry, randomized audience questions, dynamic anxiety escalation, and a fully functional regulation mechanic. Learning Unity and implementing all of this in a single day was a nice bonus accomplishment.
What we learned
This project taught me how sensitive VR systems are to proper configuration, especially regarding XR initialization and input handling. I learned the importance of separating world movement (XR Origin) from headset tracking (camera transform), and how small parameter changes in lighting, audio, and motion can significantly impact immersion. Beyond technical skills, I also learned how interactive systems can model psychological states in meaningful ways. Emotional simulation requires both engineering precision and thoughtful design.
What's next for SpeechVR
Next steps include expanding the audience with more diverse behaviors and reactions, adding speech detection so anxiety responds to pauses or filler words, and implementing performance feedback after each session. I would also like to incorporate adaptive difficulty, where the system adjusts question frequency and environmental intensity based on user behavior. Ultimately, SpeechVR could evolve into a practical training tool for students, professionals, and anyone looking to build confidence in public speaking.