Inspiration
Our inspiration was to combine Virtual Reality and Vocal Recognition into a single experience.
What it does
Our game is a Virtual Reality Shooter that allows the player to use special abilities triggered by voice commands.
How we built it
Our project was built in Unity using available assets, which we integrated with the Oculus Rift to provide a truly immersive experience.
We also utilized Unity's Windows Speech library to integrate vocal recognition.
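The voice-command trigger described above can be sketched with Unity's `UnityEngine.Windows.Speech` `KeywordRecognizer`; the keyword strings and the `CastAbility` method here are hypothetical placeholders, not the project's actual names:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Attach to the player object; listens for spoken keywords
// and fires the matching special ability.
public class VoiceAbilityTrigger : MonoBehaviour
{
    // Hypothetical ability keywords for illustration.
    private readonly string[] keywords = { "shield", "blast" };
    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(keywords);
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // args.text holds the recognized keyword.
        CastAbility(args.text);
    }

    private void CastAbility(string name)
    {
        Debug.Log("Casting ability: " + name);
        // Placeholder: the game would trigger the ability's effect here.
    }

    void OnDestroy()
    {
        // Release the recognizer when the player object is destroyed.
        recognizer?.Dispose();
    }
}
```

`KeywordRecognizer` is Windows-only, which pairs naturally with the Oculus Rift's Windows requirement.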
We also made use of open-source audio files, textures, and effects for various mechanics that enrich the user's experience.
Challenges we ran into
Initially, we wanted to create a two-player experience with Virtual Reality and Vocal Recognition; however, due to technical difficulties as well as limitations of the Oculus hardware, we were not able to proceed with that idea.
Other technical challenges included deprecated or incompatible codebases that were not optimized for the Oculus in any way.
At the beginning, we also struggled to come up with and commit to a captivating idea.
Lastly, we had to work around the logistics of an inconvenient wired internet connection.
Accomplishments that we're proud of
Transitioning the game from a 2D orthographic perspective to an immersive Virtual Reality first-person perspective.
The drastic modifications made from the preexisting included project environment to the end project environment.
Creating the player's special abilities from scratch that fully utilize Vocal Recognition.
What we learned
How to integrate Vocal Recognition into Unity. How not to set up multiple displays in Unity...
How to work with the Oculus and Unity.
What's next for BunnyWatchVR
General refinement and refactoring of current mechanics. Addition of more abilities, features, and two-player support.