Inspiration
The project was inspired by the desire to create deeply personalized music experiences: connecting users' emotional states and visual aesthetics to the music they hear. It aims to go beyond simple genre selection by analyzing mood and visual cues.
What it does
"Feeling the Vibe" analyzes user-uploaded photos or videos to detect emotions and visual color palettes. Based on this multi-modal analysis, it generates an ultra-personalized music playlist and a detailed "vibe" description.
How we built it
The application was built using React, TypeScript, and Tailwind CSS for the frontend, with a Node.js/Express backend. It integrates face-api.js for real-time emotion detection, a custom color analyzer, and optionally OpenAI for enhanced music curation.
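The emotion-detection step looks roughly like the following TypeScript sketch; the function names and the /models path are illustrative choices, not our exact code.

```ts
import * as faceapi from 'face-api.js';

// Load the lightweight face detector and the expression classifier once,
// e.g. at app startup (model weights served from a static /models directory).
export async function loadEmotionModels(): Promise<void> {
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');
}

// Return the most probable expression label for one face, or null if
// no face was detected in the image or video frame.
export async function detectDominantEmotion(
  input: HTMLImageElement | HTMLVideoElement
): Promise<string | null> {
  const result = await faceapi
    .detectSingleFace(input, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
  if (!result) return null;

  // `expressions` maps labels (happy, sad, angry, ...) to probabilities.
  const ranked = (Object.entries(result.expressions) as [string, number][])
    .sort((a, b) => b[1] - a[1]);
  return ranked[0][0];
}
```

Because face-api.js accepts both image and video elements as input, the same detection path works for uploaded photos and for sampled video frames.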
Challenges we ran into
Integrating in-browser ML models through face-api.js and developing a robust color analysis system presented significant technical challenges. Ensuring seamless fallback behavior when external AI services were unavailable also required careful implementation.
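The fallback pattern can be sketched like this; curateWithOpenAI and the genre table are illustrative placeholders rather than our production code.

```ts
interface VibeInput {
  emotion: string;   // dominant emotion from face analysis
  palette: string[]; // dominant colors from the color analyzer, as hex codes
}

// Placeholder standing in for the real OpenAI-backed curation call.
async function curateWithOpenAI(_input: VibeInput): Promise<string[]> {
  throw new Error('OpenAI API key missing or service unavailable');
}

// Deterministic emotion-to-genre rules keep the app usable when the
// external AI service is down or unconfigured.
function curateFromRules({ emotion }: VibeInput): string[] {
  const genresByEmotion: Record<string, string[]> = {
    happy: ['upbeat pop', 'funk'],
    sad: ['acoustic', 'lo-fi'],
    angry: ['rock', 'metal'],
    neutral: ['indie', 'chillhop'],
  };
  return genresByEmotion[emotion] ?? genresByEmotion.neutral;
}

export async function curatePlaylist(input: VibeInput): Promise<string[]> {
  try {
    return await curateWithOpenAI(input); // preferred path
  } catch {
    return curateFromRules(input);        // graceful degradation
  }
}
```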
Accomplishments that we're proud of
We are proud of creating a multi-modal AI analysis system that combines facial emotions, color psychology, and user preferences to generate unique playlists. We are equally proud of the intuitive, visually appealing user interface that ties the experience together.
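One plausible way to blend the three signals, shown purely as an illustrative sketch: the Mood type, the majority vote, and the color heuristic are assumptions, not our exact algorithm.

```ts
type Mood = 'energetic' | 'calm' | 'melancholic';

// Crude color-psychology heuristic (an assumption for this sketch):
// warmer palettes read as energetic, cooler ones as calm.
function moodFromColor(hex: string): Mood {
  const r = parseInt(hex.slice(1, 3), 16);
  const b = parseInt(hex.slice(5, 7), 16);
  return r > b ? 'energetic' : 'calm';
}

// Majority vote across the three signals; the user's stated preference
// counts as one vote rather than overriding the visual evidence.
export function blendSignals(
  emotionMood: Mood,
  dominantColor: string,
  preferredMood?: Mood
): Mood {
  const votes = [emotionMood, moodFromColor(dominantColor), preferredMood]
    .filter((m: Mood | undefined): m is Mood => m !== undefined);
  const counts = new Map<Mood, number>();
  for (const m of votes) counts.set(m, (counts.get(m) ?? 0) + 1);
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```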
What we learned
We gained valuable experience integrating diverse AI/ML technologies into a cohesive full-stack application. The project also highlighted the importance of designing a flexible architecture that supports both advanced AI capabilities and reliable fallback options.
What's next for Feeling the Vibe
Future plans include integrating directly with popular music streaming services for seamless playlist playback and further enhancing the AI models for even more nuanced emotional and visual analysis. We also aim to expand user profiling for deeper personalization.