Inspiration
The genesis of Aural AI was rooted in a personal experience that took me by surprise—a behavioral question during an interview that I found particularly challenging. This moment of vulnerability sparked the idea: What if there was a way to prepare more effectively for these unpredictable moments? Thus, Aural AI was born, designed to harness the power of machine learning to provide a comprehensive training platform for job seekers.
What it does
Aural AI offers a unique blend of speech recognition and artificial intelligence to critique and improve users' interview responses. By analyzing aspects like word choice, speech patterns, and thematic relevance, our platform provides tailored feedback to help users refine their answers, enhance their communication skills, and ultimately, boost their confidence for real interview situations.
How we built it
Aural AI integrates speech-to-text software with a sophisticated machine learning model. Users speak their answers to behavioral interview questions, and our system not only transcribes these responses but also analyzes them for content richness, delivery style, and confidence. The core of our project is a bespoke machine learning model, which was meticulously trained to understand and evaluate the nuances of effective communication. This model is the heartbeat of Aural AI, enabling personalized feedback that guides users toward improved performance.
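One of the signals such a pipeline can compute from a transcribed answer is filler-word frequency. The sketch below is purely illustrative (the function name and the fixed filler list are our assumptions, not taken from the actual Aural AI codebase; the real model evaluates delivery far more holistically):

```python
import re
from collections import Counter

# Common filler words; a fixed list is an illustrative simplification —
# a trained model would learn such delivery signals from data.
FILLERS = {"um", "uh", "like", "basically", "actually", "literally"}

def filler_word_report(transcript: str) -> dict:
    """Count filler words in a transcribed answer and return a
    per-word tally plus their share of all spoken tokens."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(t for t in tokens if t in FILLERS)
    total = len(tokens)
    return {
        "fillers": dict(counts),
        "filler_ratio": round(sum(counts.values()) / total, 3) if total else 0.0,
    }
```

A report like this can then be folded into the feedback shown to the user, e.g. flagging answers where the filler ratio is unusually high.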
Challenges we ran into
Fine-tuning our own model and then deploying it proved a significant challenge: after extensive fine-tuning, we were unable to get the LLM deployed. Switching to OpenAI's API required not only technical work to integrate and optimize its capabilities within our infrastructure, but also a strategic pivot in how we analyze and give feedback on users' responses. Despite these hurdles, adopting OpenAI significantly advanced the application, sharpening the precision and depth of our feedback and affirming our commitment to using the best available technology to support job seekers.
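After the pivot, feedback can be requested through OpenAI's chat completions API. The sketch below shows one way this might look; the model name, prompt wording, and function names are our assumptions rather than details from the actual Aural AI codebase, and an `OPENAI_API_KEY` environment variable is assumed to be set:

```python
def build_feedback_messages(question: str, transcript: str) -> list:
    """Assemble chat messages asking the model to critique an interview answer."""
    return [
        {"role": "system",
         "content": ("You are an interview coach. Critique the answer for "
                     "word choice, structure, and confidence, then suggest "
                     "one concrete improvement.")},
        {"role": "user",
         "content": f"Question: {question}\nAnswer: {transcript}"},
    ]

def get_feedback(question: str, transcript: str) -> str:
    """Send the assembled messages to OpenAI and return the critique text."""
    from openai import OpenAI  # imported here so the builder above stays testable offline
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice; any chat model works
        messages=build_feedback_messages(question, transcript),
    )
    return response.choices[0].message.content
```

Keeping the prompt assembly separate from the network call makes the prompt easy to iterate on and unit-test without spending API credits.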
Accomplishments that we're proud of
We got the webcam capture and speech transcription working properly.
What we learned
We learned how to work with Next.js and how to fine-tune a model, even though we were ultimately unable to deploy it. We also learned how to use the OpenAI API and manage its API key.
Embarking on the Aural AI project was both a technological and personal journey. I delved into the intricacies of machine learning, exploring how AI could understand and critique the content and delivery of speech. The process illuminated the vast potential of AI in personal development, extending beyond mere data analysis to becoming a personalized coach. This journey was not just about building an application but also about understanding the intersection of technology and human communication.
What's next for Aural AI
Built With
- flask
- next.js
- open-ai-api