Inspiration
While waiting in line for food, we came across a TikTok trend where people were doing tricky hand exercises meant to stimulate the brain and help prevent cognitive decline and dementia. That idea sparked something for us. We wanted to take the concept of quick, brain-engaging challenges and turn it into something more interactive and competitive.
That’s how SeaQuest was born – a fast-paced, two-player game where competitors go face to face in a series of quick mini brain challenges. Each round is designed to be engaging, reactive, and mentally stimulating, combining focus, speed, and friendly competition with an aesthetically pleasing underwater mission theme.
What it does
SeaQuest is a two-player competitive web game featuring a series of "quests" based on reaction speed and accuracy. Players race to complete cognitive tasks that appear on screen in randomized order, responding with the keyboard (pressing Y for yes or N for no, typing specific letters, or pressing or withholding the space bar). In this race to the bottom of the ocean, the lowest total reaction time wins! Not only is it fun, but our games are also backed by cognitive science.
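The scoring rule described above can be sketched as follows. This is a minimal illustration, not the actual SeaQuest source: the type and function names are hypothetical, and the flat penalty for wrong answers is an assumption we make here for the sketch.

```typescript
// Hypothetical per-trial record: whether the answer was correct
// and how long the player took to respond.
type Trial = { correct: boolean; reactionMs: number };

// Sum the reaction times across all quests, adding an assumed
// flat penalty for each wrong answer.
function totalTime(trials: Trial[], penaltyMs = 1000): number {
  return trials.reduce(
    (sum, t) => sum + t.reactionMs + (t.correct ? 0 : penaltyMs),
    0,
  );
}

// The lower total reaction time wins the race to the ocean floor.
// Returns 1 or 2 for the winning player, 0 for a tie.
function winner(p1: Trial[], p2: Trial[]): 1 | 2 | 0 {
  const a = totalTime(p1);
  const b = totalTime(p2);
  return a < b ? 1 : b < a ? 2 : 0;
}
```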
How we built it
Our development was a highly iterative process split between design and execution.
Frontend & Design: We started with pure ideation and drafting our Product Requirements Document (PRD) to align our vision. Next, we built lo-fi prototypes in Figma. From there, we moved into the most important phase of our project: usability testing. Because we were unsure how players would actually perceive the UI and mechanics, we tested our lo-fi flows on hackathon participants and took detailed notes to identify and resolve points of confusion. Fixing these friction points took significant effort and constant team communication, but talking it through allowed us to massively improve the user flow before finalizing our hi-fi designs.
Backend & Logic: We used a combination of agentic AI tools, including Cursor, Gemini, and Claude Code, to scaffold our features, then iterated toward improved functionality by breaking prompts into smaller, more concrete tasks for each agent. Our developers collaborated to build out the core flow: the introduction, the tutorial, the main subgames, a winner dashboard, and the scientific explanations for each puzzle.
Challenges we ran into
Narrowing the project scope: We debated between creating a fun, idle "chill game" or a strict cognition challenge, but realized we could deliver a unique product by targeting the middle ground between the two.
Balancing design vision with user needs: We had to balance what we originally planned to implement with what users actually needed. For example, we initially designed the gameplay screen to be minimalistic, with no instructions. However, usability testing showed that users couldn't remember the tutorial well enough to complete tasks successfully. As a result, we revised the design to include short, concise prompts: less detailed than the tutorial, but enough to guide users effectively.
Learning agentic AI: This was a big learning curve because none of us had ever tried vibe coding with more advanced engines. Setting things up, learning to prompt effectively within our token limits, getting design approvals, and iterating was a challenging process, but we overcame it and ended up having a lot of fun along the way.
Accomplishments that we're proud of
Effectively implementing usability testing: We didn’t know what would or wouldn't work for the user, so we conducted rounds of usability testing with 20+ users. Based on their feedback and conversations with our designer, we were able to vastly improve our usability.
Aesthetic appeal: Incorporating hand-drawn assets, a cohesive color theme, and visually appealing layouts using React made our project look amazing.
Deploying the app! This was a first for most of our team. It had been one of our main goals from the start, so actually launching it felt both exciting and meaningful. We wanted users to be able to play the game on their own computers with minimal bugs, rather than just interacting with a pre-engineered demo.
What we learned
How to effectively use agentic AI (Cursor).
The product development process from start to finish: brainstorming, PRD, wireframes, full-stack iteration, and deployment.
Communicating our ideas with each other and the agents is hard! Explaining individual features and how they worked together, scoring systems, and UI took effort, but we learned how to truly work together as a team.
What's next for SeaQuest
Multiplayer expansion: Adding a lobby using WebSockets for cross-device play.
Leaderboards: Tracking player performance over time to showcase cognitive improvement.
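The lobby's matchmaking could start as simple first-come-first-served pairing, with WebSockets layered on top for transport. This is a speculative sketch of that pairing logic only (class and method names are hypothetical, not planned SeaQuest code):

```typescript
// Minimal first-come-first-served lobby: the first player waits,
// and the next arrival is paired with them immediately.
class Lobby {
  private waiting: string[] = [];
  private matches: Array<[string, string]> = [];

  // Add a player; returns the new match if an opponent was waiting,
  // or null if the player must wait for one.
  join(playerId: string): [string, string] | null {
    const opponent = this.waiting.shift();
    if (opponent !== undefined) {
      const match: [string, string] = [opponent, playerId];
      this.matches.push(match);
      return match;
    }
    this.waiting.push(playerId);
    return null;
  }
}
```

In a WebSocket version, each `join` would correspond to a new client connection, and forming a match would trigger messages to both sockets to start the game.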
Prize Categories Description
UI/UX - We heavily prioritized the user-centered design process, with multiple iterations of wireframes and a detailed PRD from the beginning. Next, we hand-drew assets as graphics and conducted usability testing on 20+ users to refine features that weren't as intuitive as we expected. The overall effort demonstrates a production-level UI/UX mindset, which motivated us to enter this track.
Neuro Track - The inspiration for a cognitive science game came from a TikTok trend about finger exercises to prevent cognitive decline and dementia. Building on this original idea, SeaQuest modelled its games on four cognitive science concepts that help researchers assess brain function: the Corsi block test, the Stroop effect, cognitive load, and the Go/No-Go task. We altered the tasks only on the frontend, preserving their original mechanics, so the game remains a fun, speed- and accuracy-based competition whose benefits users can explore through our "Learn the Science" feature. We believe this aligns best with the Neuro track in its creative modeling and competitiveness.
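To make one of the four tasks concrete, a Stroop-style trial pits a color word against the ink it is rendered in; the player must report the ink color, which is measurably slower when the word spells a different color. The helpers below are a hypothetical illustration of that check, not the shipped implementation:

```typescript
// A Stroop trial: a color word displayed in some ink color.
type StroopTrial = { word: string; inkColor: string };

// Congruent trials (word matches ink, e.g. "RED" in red) are easy;
// incongruent trials (e.g. "RED" in blue) produce the Stroop delay.
function isCongruent(t: StroopTrial): boolean {
  return t.word.toLowerCase() === t.inkColor.toLowerCase();
}

// A response is correct when the player names the ink color,
// not the word that is spelled.
function isCorrect(t: StroopTrial, response: string): boolean {
  return response.toLowerCase() === t.inkColor.toLowerCase();
}
```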
Built With
- claude
- cursor
- figma
- gemini
- github
- love
- matcha
- react
- typescript
