Inspiration

As pet owners ourselves, and through conversations with other pet owners, we realized that it's often hard to tell whether a symptom is minor or life-threatening. It's also hard to connect with other pet owners who have gone through a similar issue. We kept thinking there had to be a better in-between: something that helps people make sense of what they're seeing before panic sets in. VetAI came from wanting to reduce that anxiety and give both pet owners and professionals information that actually feels helpful and grounded.

What it does

VetAI is a veterinary diagnostic assistant mobile app that uses AI to help both veterinary professionals and pet owners analyze pet health conditions through images and voice. Users can capture a photo of a pet condition and receive a diagnosis with confidence scores, risk levels, and extracted visual indicators like asymmetry, border irregularity, color variation, and diameter. The app supports both Professional Mode (clinical terminology, research citations, detailed protocols) and Consumer Mode (plain English explanations, urgency levels, cost estimates, and emergency auto-detection). It includes hands-free voice interaction, allowing users to speak naturally while the AI responds with text-to-speech. If confidence is low, VetAI asks targeted follow-up questions and provides peer-reviewed veterinary research citations in real time. It also includes community features where pet owners can connect with others facing similar conditions.
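To make the dual-mode behavior concrete, here is a minimal sketch of how the low-confidence follow-up and consumer-mode emergency auto-detection described above might be structured. All names, thresholds, and field shapes here are illustrative assumptions, not VetAI's actual implementation:

```python
from dataclasses import dataclass, field

# Illustrative risk tiers; the app's actual categories may differ.
RISK_LEVELS = ("low", "moderate", "high", "emergency")

@dataclass
class Analysis:
    """Hypothetical shape of one image-analysis result."""
    diagnosis: str
    confidence: float                 # 0.0-1.0 model confidence
    risk_level: str                   # one of RISK_LEVELS
    indicators: list = field(default_factory=list)  # e.g. "asymmetry", "border irregularity"

def needs_follow_up(a: Analysis, threshold: float = 0.7) -> bool:
    """Low-confidence results trigger targeted follow-up questions."""
    return a.confidence < threshold

def is_emergency(a: Analysis) -> bool:
    """Consumer-mode emergency auto-detection: flag high-risk findings."""
    return a.risk_level == "emergency" or (a.risk_level == "high" and a.confidence >= 0.8)

# A confident, high-risk finding auto-flags; a vague one asks questions instead.
result = Analysis("possible mast cell tumor", 0.85, "high",
                  ["asymmetry", "color variation"])
```

In this sketch, emergency detection and follow-up questions are two independent gates on the same result, which is one simple way to get the "ask before alarming" behavior.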

How we built it

  • React Native with Expo and TypeScript (mobile app)
  • Python backend (FastAPI)
  • Anthropic's Claude Sonnet 4 (image analysis and conversational reasoning)
  • OpenAI's Whisper (speech-to-text for voice interaction)
  • Perplexity Sonar Pro (real-time retrieval of peer-reviewed veterinary research citations)

The app includes camera capture, real-time analysis screens, confidence visualizations, animated UI components, and tab-based navigation.
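The flow tying these pieces together can be sketched roughly as follows. This is a simplified, hypothetical orchestration, not the actual backend code: the stub functions stand in for the real Whisper, Claude, and Sonar Pro API calls, and all return values are made up for illustration.

```python
# Hypothetical end-to-end flow: transcribe -> analyze -> (maybe) follow up -> cite.

def transcribe(audio_bytes: bytes) -> str:
    """Stub standing in for an OpenAI Whisper speech-to-text call."""
    return "my dog has a dark spot on her ear"

def analyze_image(image_bytes: bytes, question: str) -> dict:
    """Stub standing in for a Claude Sonnet 4 vision call returning a structured result."""
    return {"diagnosis": "benign pigmented lesion", "confidence": 0.55,
            "follow_up": ["Has the spot changed size recently?"]}

def fetch_citations(diagnosis: str) -> list[str]:
    """Stub standing in for Perplexity Sonar Pro citation retrieval."""
    return [f"Peer-reviewed source on: {diagnosis}"]

def handle_request(audio: bytes, image: bytes,
                   confidence_threshold: float = 0.7) -> dict:
    question = transcribe(audio)
    result = analyze_image(image, question)
    # Low confidence: return targeted follow-up questions instead of a verdict.
    if result["confidence"] < confidence_threshold:
        return {"status": "needs_more_info", "questions": result["follow_up"]}
    return {"status": "ok", "diagnosis": result["diagnosis"],
            "citations": fetch_citations(result["diagnosis"])}

response = handle_request(b"", b"")
# Confidence 0.55 < 0.7, so this path returns follow-up questions.
```

The point of the sketch is the control flow: citation retrieval only runs once the model is confident enough to commit to a diagnosis, which keeps low-confidence interactions focused on gathering more information.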

Challenges we ran into

One of the biggest hurdles was the lack of accessible, reliable veterinary data for our specific use case. We initially assumed usable public datasets would be available, but most of what we found was either too limited, poorly structured, or irrelevant to what we were building. So we built our own dataset by sourcing images of pet diseases online. We also reached out directly to pet owners to understand their experiences, the symptoms they observed, and the outcomes their pets had. It was a slower, more hands-on process than we expected, but it helped us collect more realistic and nuanced data.

Accomplishments that we're proud of

VetAI is more than just an image classifier. It feels like a real, multi-modal AI assistant. It brings together vision, voice, research citations, tool use, and even community support into one smooth experience. The dual-mode setup makes sure vets get clinical depth while pet owners get clear, understandable guidance that actually helps in the moment. We also built in smart follow-up questions, which makes the results feel more thoughtful and reliable. And on top of that, we shipped a fully working mobile app with polished UI, live confidence visuals, emergency detection, and multi-turn conversation memory all within a hackathon timeframe.

What's next for VetAI

We're seeking pilot partnerships with veterinary clinics and funding to launch our public beta. Our goal: make pet healthcare accessible, transparent, and connected.

Built With

  • anthropic
  • openai
  • perplexity
  • python
  • reactnative