Inspiration

Pursuing a master’s degree in Human Centered Design & Engineering has been transformative for me as a designer, inspiring me to create a tool that empowers aspiring designers without access to top programs to continue developing their skills. During my undergraduate years, Erik and I co-founded UX@UW, a student organization dedicated to upskilling early-career product designers. Reflecting on our experience, we recognized that the most effective way to upskill is through practice coupled with direct mentorship and feedback. With this in mind, we combined hands-on learning with AI’s capabilities, creating simulations of design decision-making scenarios where AI provides personalized feedback to foster growth in designers.

What it does

The product we’ve developed is an interactive simulation where designers are presented with complex, real-world design challenges that they need to solve. These scenarios cover a variety of design-related problems, encouraging participants to think critically, strategize, and apply their skills in context. After designers submit their solutions, an AI model processes their responses, evaluating both the quality and thought process behind their decisions.

The AI then generates a comprehensive assessment, providing not just an overall score but also personalized feedback. This feedback highlights areas where designers excelled, offering validation of their strengths, while also pinpointing specific opportunities for improvement. The suggestions are tailored to the individual, offering actionable advice on refining their approach, identifying gaps in their problem-solving, and enhancing their design thinking. This cycle of practice, feedback, and targeted improvement aims to replicate the kind of mentorship and iterative learning that’s typically only accessible through high-level design programs, providing a unique growth opportunity for designers at all stages.

How we built it

We built this product using a combination of React, JavaScript, and the OpenAI API. React serves as the frontend framework, enabling an interactive and user-friendly interface where designers can engage with the simulation seamlessly. JavaScript powers the backend, handling the business logic, data management, and integration with external services, while also ensuring smooth communication between the frontend and the AI model.

The core functionality—processing design submissions and generating feedback—is driven by the OpenAI API. To maximize the quality of AI feedback, we crafted a highly detailed instructional prompt. This prompt was developed after numerous discussions with experienced design mentors in our personal network, who provided invaluable insights into what constitutes effective feedback for growing designers. We used their expertise to ensure that the AI’s feedback is relevant, constructive, and directly applicable to designers’ skill improvement. By incorporating real-world knowledge from these mentors, we were able to align the AI’s output closely with the standards and expectations of professional design critique, making the experience as beneficial as possible for participants.
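As a rough illustration of how mentor-derived criteria can be folded into an instructional prompt, the sketch below assembles a rubric section from a list of criteria. The criteria text and function names are hypothetical stand-ins, not the project's actual 7,000-character prompt.

```javascript
// Hypothetical reconstruction: composing an instructional prompt from
// mentor-derived evaluation criteria. The rubric items are illustrative.
const CRITERIA = [
  "User empathy: does the response ground decisions in user needs?",
  "Problem framing: is the core problem identified before jumping to solutions?",
  "Trade-off reasoning: are alternatives considered and justified?",
  "Communication: is the rationale clear enough to present to stakeholders?",
];

function buildInstructionalPrompt(criteria) {
  // Number each criterion so the model can reference them in its feedback.
  const rubric = criteria.map((c, i) => `${i + 1}. ${c}`).join("\n");
  return [
    "You are an experienced design mentor reviewing a practice submission.",
    "Score each criterion from 0-10 and explain your reasoning.",
    "Evaluation criteria:",
    rubric,
    "End with two concrete, actionable suggestions for improvement.",
  ].join("\n\n");
}
```

Keeping the criteria in a plain array like this makes it easy to iterate on the rubric with mentors without touching the surrounding prompt scaffolding.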

Challenges we ran into

One of the challenges we faced was overcoming false positives in our AI model’s analysis. Initially, the model would incorrectly interpret certain aspects of the designers’ submissions, assigning positive scores to responses that didn’t meet the criteria. This led to inaccurate feedback that could mislead users.

To address this issue, we revised our prompt by incorporating explicit example cases where the model should assign a score of zero. These examples included scenarios where the designer’s response lacked critical thinking, failed to address key parts of the problem, or demonstrated a misunderstanding of fundamental design principles. By clearly defining these situations within the prompt, we guided the model to better distinguish between acceptable and unacceptable responses.
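The zero-score calibration described above can be sketched as few-shot examples appended to the base prompt. The example responses and wording below are illustrative assumptions, not the actual cases used in the project.

```javascript
// Hypothetical sketch: explicit zero-score examples appended to the
// prompt so the model learns when a score of 0 is warranted.
const ZERO_SCORE_EXAMPLES = [
  {
    response: "Just make it look nicer and users will like it.",
    reason: "No critical thinking; restates the goal without any design rationale.",
  },
  {
    response: "I would add more features so there is something for everyone.",
    reason: "Ignores the stated problem and misunderstands basic scoping principles.",
  },
];

function withZeroScoreExamples(basePrompt, examples) {
  const section = examples
    .map((e) => `Response: "${e.response}"\nScore: 0\nWhy: ${e.reason}`)
    .join("\n\n");
  return `${basePrompt}\n\nAssign a score of 0 in cases like these:\n\n${section}`;
}
```

Spelling out negative examples this way is a standard few-shot technique for tightening a model's decision boundary between acceptable and unacceptable responses.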

This refinement significantly reduced the occurrence of false positives, making the AI’s assessment more reliable and accurate. Designers now receive feedback that more accurately reflects the quality of their work, ensuring that the suggestions for improvement are both relevant and actionable. This iterative tuning process has been instrumental in enhancing the overall effectiveness and credibility of the AI-generated evaluations.
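A complementary guardrail, sketched below, is to validate the model's output before it reaches the designer, so that a malformed or out-of-range score is caught rather than displayed. The JSON shape (`score`, `strengths`, `improvements`) is an assumed response format, not the project's actual schema.

```javascript
// Hypothetical sketch: validate the AI's structured assessment so that
// out-of-range scores or missing feedback fields never reach the user.
function parseAssessment(raw, maxScore = 10) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: "Response was not valid JSON" };
  }
  const { score, strengths, improvements } = parsed;
  if (!Number.isFinite(score) || score < 0 || score > maxScore) {
    return { ok: false, error: `Score out of range: ${score}` };
  }
  if (!Array.isArray(strengths) || !Array.isArray(improvements)) {
    return { ok: false, error: "Missing feedback fields" };
  }
  return { ok: true, score, strengths, improvements };
}
```

A failed check can trigger a retry of the API call, which pairs well with prompt-side tuning: the prompt reduces bad outputs, and the validator catches the stragglers.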

Accomplishments that we're proud of

This is the first AI product any of us have ever built, and it’s incredibly rewarding to see it come to life! The journey from concept to reality has been an intense and exciting one, especially since we managed to design and build this entire system within just a few months. The challenge of venturing into AI development for the first time pushed us out of our comfort zones, requiring us to learn new technologies and approaches on the fly.

We navigated everything—from crafting effective AI prompts to integrating advanced machine learning APIs into a full-stack application—all while ensuring the user experience remained intuitive and engaging. Seeing our product evolve from initial sketches and brainstorming sessions to a functional, impactful tool has been a powerful learning experience. It’s amazing to witness the tangible outcome of our efforts: a product that combines AI with thoughtful design to genuinely help other designers grow.

What we learned

We learned that AI is capable of effectively analyzing human intelligence, but its accuracy and usefulness are directly tied to the quality of the criteria defined within its prompt. The AI model’s performance depends on the precision and clarity of the instructions provided to it, which makes prompt engineering a crucial part of this development process.

Ideally, we would have built or fine-tuned our own model specifically tailored for this use case, as that would offer greater control over the model’s behavior and allow it to better meet our unique needs. However, due to a lack of funding, that wasn’t an option for us. Instead, we focused our efforts on prompt engineering and made several iterations to optimize the model’s output.

Our initial prompt was relatively simple, around 600 characters long, which provided a general structure for evaluating design submissions. However, as we observed the AI’s responses and identified gaps in its understanding—especially with the inherent subjectivity in evaluating design quality—we realized the need for more comprehensive instructions. Over time, our prompt evolved into something far more detailed, now extending beyond 7,000 characters. It includes numerous specific examples, covers a variety of edge cases, and provides step-by-step instructions to help the model evaluate different aspects of a designer’s work accurately. This detailed prompt helped the AI better understand the nuances of design assessments, significantly improving the quality of its analysis.

This journey showed us that, even with off-the-shelf models, AI can be effectively leveraged to assess something as subjective and nuanced as design skills. It also inspired us to consider broader implications. If AI can do a good job assessing the complexities of design, there are many other disciplines where AI could play a transformative role in analyzing and enhancing human intelligence.

What's next for DesignScout

We’ve brought this project to a stage where it’s ready for real-world testing, and we couldn’t be more excited about the journey ahead. Next week, we’re launching an exclusive beta for the design community at the University of Washington, allowing designers to experience DesignScout firsthand and provide us with valuable feedback to help shape the product. This will be a crucial phase in validating our solution and understanding how well it meets the needs of design students and professionals.

Beyond the beta, we’ve also made strides in connecting with broader opportunities. We submitted DesignScout to the Accenture Startup Program, and just last week, we were thrilled to be invited to pitch at AWS re:Invent in Las Vegas this December. This is a huge opportunity for us to share our vision on a major stage, connect with potential partners, and gather insights from industry leaders.

In parallel, we’ve begun developing a second project to expand our offerings—a tool designed to give designers personalized feedback on their online portfolios. Built on an AI agentic framework, this new product aims to help designers improve the way they present their work online. Designers will simply provide a link to their portfolio, and our AI will analyze its structure, content, and presentation, delivering personalized recommendations for enhancement. Our goal is to empower designers to better communicate their skills and achievements, ultimately helping them stand out to potential employers.

These next steps are all about expanding our impact, gathering feedback, and demonstrating the potential of AI in transforming design education and recruitment. We’re eager to see how DesignScout evolves with real user insights and are optimistic about what’s to come!

Built With

React, JavaScript, OpenAI API