
Superpatrex/InterView


What is this?

This is a project made for the KnightHacks 2023 hackathon, authored by Jordan, Jack, August, and Gabe. InterView is an immersive interview experience powered by a unique AI pipeline designed to assess your professionalism, charisma, and proficiency in this one-on-one conversation simulator.

Inspiration

Jack is a mentor at First Step @ UCF, where he helps members of his organization succeed in their early careers. We were inspired by his mentees' desire to hone their networking and interview skills without having to physically meet with recruiters and hiring managers (hi sponsors!).

InterView was the perfect opportunity for us to learn how to integrate AI into our first generative project and serve individuals wanting to practice their interview skills.

What it does

InterView places you in an immersive one-on-one with an interviewer powered by a unique instruction-tuned AI paradigm we conceived ourselves. An operator agent drives the conversation with the user while three supervisor agents assess it against three criteria -- Professionalism, Charisma, and Proficiency. We use the Oculus Voice SDK so the user can speak to their interviewer and hear text-to-speech responses, simulating the flow of verbal conversation, and each interview ends with a final breakdown of the user's performance.

How we built it

We created the VR environment in Unity and C#. August is passionate about making fun and inviting virtual spaces, and we think he really nailed it for InterView. We also implemented three components that we're especially stoked to have pulled off:

We knew that we wanted our user to be able to speak to and hear our interviewer -- conversational tempo and tone are crucial for nailing an interview. Jordan found that the Oculus Voice SDK was perfect for this use case and built the implementation that lets us parse our user's dialogue into text for our agents to work from.

Our second and third components are part of our AI pipeline. We wanted to quantify the quality of the conversation by criteria that would be meaningful for interview feedback. Our solution: three separate supervisor agents, each gauging a different measurement for each piece of the conversation. We use these measurements to serve the user feedback on different aspects of their dialogue so that they can deliberately improve.
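As a rough sketch of the supervisor pattern described above (all names and the keyword heuristic here are our illustrative assumptions -- in the real pipeline each supervisor is an instruction-tuned AI agent, not a keyword matcher):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the supervisor pattern: one interface, three
// instances (Professionalism, Charisma, Proficiency), and a pipeline
// step that scores each piece of the conversation on every criterion.
public interface ISupervisor
{
    string Criterion { get; }
    // Score one piece of user dialogue on this supervisor's criterion (0-10).
    int Score(string userUtterance);
}

public class KeywordSupervisor : ISupervisor
{
    public string Criterion { get; }
    private readonly string[] positiveSignals;

    public KeywordSupervisor(string criterion, string[] positiveSignals)
    {
        Criterion = criterion;
        this.positiveSignals = positiveSignals;
    }

    // Placeholder heuristic standing in for the instruction-tuned agent.
    public int Score(string userUtterance)
    {
        int hits = 0;
        foreach (var signal in positiveSignals)
            if (userUtterance.Contains(signal, StringComparison.OrdinalIgnoreCase))
                hits++;
        return Math.Min(10, hits * 5);
    }
}

public static class InterviewPipeline
{
    // Run every supervisor over one utterance, collecting per-criterion scores
    // that feed the end-of-interview breakdown.
    public static Dictionary<string, int> Assess(
        IEnumerable<ISupervisor> supervisors, string utterance)
    {
        var scores = new Dictionary<string, int>();
        foreach (var s in supervisors)
            scores[s.Criterion] = s.Score(utterance);
        return scores;
    }
}
```

Keeping the supervisors behind one interface means the operator agent never needs to know how each criterion is judged -- swapping the heuristic for an AI-backed rater is a one-class change.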

Challenges we ran into

August: The biggest challenge I ran into was getting the physics right for the objects in the scene. There was a lot more to the collision and throw physics than I anticipated.

Gabe: I had a lot more trouble creating the UIs than I thought I might have. I've never used Unity before so it was considerably different from creating a UI in React.

Jordan: The text-to-speech and speech-to-text were not made to work in VR. [Laughs] And Meta used deprecated objects to run their voice SDK. Their forums were also not helpful. So I basically had to reverse-engineer the samples and then re-engineer them to work in VR.

Jack: Definitely the speech-to-text, it was a really hard thing. We also ran into a really weird git merge problem. It was also pretty surprising having our API key revoked by OpenAI for publishing it within our Unity project repo.

Accomplishments that we're proud of

August: I'm really happy with how the interviewer turned out, I spent a lot of time creating the assets and animations that contribute to the character's personality!

Gabe: I'm really proud of the instruction-tuned AI agents that we were able to implement. We spent a lot of time tweaking the operator agent's conversation tendencies and the supervisors' rating scales.

Jordan: I'm really happy that we got the text-to-speech and speech-to-text working with the UI. It was an arduous handful of hours leading up to submission that had us questioning whether we'd complete the project at all.

Jack: Using generative AI in Unity. I think doing anything with speech-to-text or text-to-speech was really difficult. I'm also really proud of August because our environment for this project turned out way better than our previous one.

What we learned

August: I learned how to tweak facial animations and how to give a character personality.

Gabe: Leading up to the event I had spent the week learning Unity in preparation for a VR project, and I ended up learning more than double the amount during this hackathon.

Jordan: I learned that I never want to work with the OculusVoiceSDK again.

Jack: I learned that Unity hates us [laughs] -- just generally about Unity itself, and how annoying it is that some software is so deprecated it's barely usable but still left in. I think it's pretty annoying.

Running OpenAI in Unity

  • Make a class in the OpenAI namespace named UtilityAI
  • Create a static method named GetAIKey that takes no parameters
  • Place the .cs file in the Assets/Scripts folder
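Following those steps, the helper class might look like this -- a minimal sketch, where reading the key from an environment variable is our assumption (and keeps the secret out of the repo, the mistake that got our API key revoked):

```csharp
using System;

namespace OpenAI
{
    // Hypothetical sketch of the UtilityAI helper described above.
    public static class UtilityAI
    {
        // Returns the OpenAI API key. Sourcing it from an environment
        // variable (an assumption here) avoids committing the key to
        // the Unity project repository.
        public static string GetAIKey()
        {
            return Environment.GetEnvironmentVariable("OPENAI_API_KEY")
                   ?? string.Empty;
        }
    }
}
```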

Major Highlights:

  • 18 Collective hours of sleep
  • 1500 mg of caffeine
  • 1 Entire Frozen II movie watched
  • Three Hours of Subway Surfers and Minecraft Parkour distractions

Our favorite photos from the event:

