About the Project
Tracks:
- Productivity & Collaboration Tools
- AI Assistants & Automation
How It Fits the Tracks
Productivity & Collaboration Tools: AWSpeak boosts interview readiness by helping users practice structured communication, reflect on their experiences, and improve under realistic pressure, making it a productivity tool for career development and self-improvement.
AI Assistants & Automation: The app acts as a smart assistant, automating mock interviews with voice-driven interactions and AI-generated feedback. It streamlines the coaching process using advanced LLMs, speech synthesis, transcription, and rubric-based evaluation, all without a human interviewer.
What We Built
AWSpeak is a web application that delivers hyper-realistic mock interviews using advanced AI models, tailored to reflect Amazon’s leadership principles. Users experience voice-driven, dynamic interviews where they respond to behavioral questions based on a specific job description. After the session, they receive detailed, AI-generated feedback that highlights strengths, weaknesses, and actionable tips — complete with a downloadable transcript.
What Inspired Us
As students preparing for internships and full-time roles, we often found interview prep to be generic, repetitive, or inaccessible. Most platforms don't simulate real pressure or tailor their questions to the job at hand. We wanted to create something better:
- A tool that adapts to the role and company values
- One that uses voice, not just text, to simulate pressure
- And one that gives detailed, fair, and personalized feedback
We especially wanted to help those who struggle with thinking on the spot or don’t have easy access to mock interview partners.
What We Learned
- How to orchestrate multiple AWS services (Bedrock, Polly, Transcribe, S3) into one seamless pipeline
- How to prompt-tune LLMs for realistic persona-driven dialogue and rubric-based feedback
- How to handle real-time voice input/output in the browser and sync it with back-end processing
- The importance of state management, async flows, and error handling in React and Flask when building AI-powered apps
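The rubric-based feedback step mentioned above can be sketched as a request builder for the Bedrock Converse API. The rubric wording, model ID, and JSON output schema here are illustrative assumptions, not the project's actual prompts:

```python
"""Sketch: building a rubric-based evaluation request for Amazon Bedrock.
The rubric criteria and model ID below are illustrative assumptions."""

# Hypothetical rubric loosely based on Amazon's leadership principles
RUBRIC = [
    "Customer Obsession: grounds the answer in user impact",
    "Ownership: takes responsibility for outcomes",
    "Dive Deep: supports claims with concrete detail",
]

def build_feedback_request(transcript: str) -> dict:
    """Build a Bedrock Converse request that scores an interview
    transcript against each rubric criterion."""
    rubric_lines = "\n".join(f"- {c}" for c in RUBRIC)
    prompt = (
        "You are an Amazon-style behavioral interviewer.\n"
        f"Score the answer below against each criterion:\n{rubric_lines}\n\n"
        "Return JSON with keys: strengths, weaknesses, tips.\n\n"
        f"Candidate answer:\n{transcript}"
    )
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"temperature": 0.2},  # low temperature for consistent scoring
    }

# Usage with boto3 (not executed here):
#   bedrock = boto3.client("bedrock-runtime")
#   resp = bedrock.converse(**build_feedback_request(transcript))
#   feedback = resp["output"]["message"]["content"][0]["text"]
```

Keeping the temperature low is one way to make repeated evaluations of the same transcript land on similar scores.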
How We Built It
- Frontend: Built with React (Next.js) and Tailwind CSS, capturing audio, rendering dynamic UI states, and integrating real-time feedback
- Backend: Powered by Flask and AWS SDKs, handling audio uploads, AI interactions, and response evaluation
AI Services:
- Amazon Bedrock: For generating interview questions and evaluating transcripts
- Amazon Polly: For converting questions into lifelike speech
- Amazon Transcribe: For real-time voice transcription
- Amazon S3: For storing audio files securely
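Wiring these services together looks roughly like the sketch below: Polly speaks a question, the candidate's recorded answer lands in S3, and Transcribe turns it into text for evaluation. The bucket name, voice, and media format are illustrative assumptions, not the project's actual configuration:

```python
"""Sketch of the Polly -> S3 -> Transcribe pipeline. Bucket, voice,
and key layout are hypothetical."""
import uuid

BUCKET = "awspeak-audio"   # hypothetical bucket name
VOICE_ID = "Joanna"        # a standard Polly voice

def question_to_speech(polly, text: str) -> bytes:
    """Synthesize an interview question to MP3 bytes with Polly."""
    resp = polly.synthesize_speech(
        Text=text, OutputFormat="mp3", VoiceId=VOICE_ID,
        Engine="neural",   # neural voices sound less monotonous
    )
    return resp["AudioStream"].read()

def answer_key(session_id: str) -> str:
    """Deterministic S3 key for a candidate's recorded answer."""
    return f"sessions/{session_id}/answer.webm"

def transcribe_answer(transcribe, session_id: str) -> str:
    """Start an async Transcribe job on the uploaded answer."""
    job = f"awspeak-{session_id}-{uuid.uuid4().hex[:8]}"
    transcribe.start_transcription_job(
        TranscriptionJobName=job,
        Media={"MediaFileUri": f"s3://{BUCKET}/{answer_key(session_id)}"},
        MediaFormat="webm",
        LanguageCode="en-US",
    )
    return job

if __name__ == "__main__":
    import boto3
    polly = boto3.client("polly")
    s3 = boto3.client("s3")
    transcribe = boto3.client("transcribe")
    audio = question_to_speech(
        polly, "Tell me about a time you disagreed with a teammate.")
    s3.put_object(Bucket=BUCKET, Key="sessions/demo/question.mp3", Body=audio)
    transcribe_answer(transcribe, "demo")
```

Because Transcribe jobs are asynchronous, the backend has to poll (or be notified) before feeding the transcript into the evaluation step.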
Challenges We Faced
- Ensuring smooth integration between client-side audio recording and server-side AWS pipelines
- Making Polly output sound natural and non-monotonous
- Handling async race conditions in frontend recording and playback logic
- Prompt engineering for feedback that is consistent, unbiased, and aligned with Amazon's values
- Making the system flexible enough to easily swap in other companies’ values or evaluation criteria
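One common way to tackle the "monotonous Polly" challenge is to wrap questions in SSML to add pauses and pacing before synthesis. The specific tag values below are illustrative; tuning them is a matter of taste:

```python
"""Sketch: SSML wrapping to make Polly output sound less monotonous.
Break duration and speaking rate are illustrative values."""

def to_ssml(question: str) -> str:
    """Wrap an interview question in SSML: a short lead-in pause
    and slightly slower delivery."""
    return (
        "<speak>"
        '<break time="400ms"/>'
        '<prosody rate="95%">'
        f"{question}"
        "</prosody>"
        "</speak>"
    )

# With Polly, pass TextType="ssml" instead of plain text:
#   polly.synthesize_speech(Text=to_ssml(q), TextType="ssml",
#                           OutputFormat="mp3", VoiceId="Joanna",
#                           Engine="neural")
```

Note that neural voices support only a subset of SSML tags, so pauses and rate adjustments are the safest levers.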
Built With
- bedrock
- flask
- nextjs
- polly
- postman
- python
- react
- s3
- tailwind
- transcribe
- typescript