OSU Dining Agent — Say it. See it. Eat it.
A voice-first dining assistant that understands budget, diet, and mood — and lets you add or order items instantly by ID.
Inspiration
On campus, food decisions are messy: What’s open now? What fits my budget? Can I just say it and have it ordered? We wanted a dining agent that speaks human: “spicy vegan under $12”, “add 23”, or “order 23” — and makes it happen, fast.
What it does
• Smart Suggestions: Ask for “vegan bowls under $12” and get ranked menu cards with visible IDs (e.g., #23).
• Add by Voice or Text: Say/type “add 23” or “order 23 and 45” — it parses multiple IDs and acts immediately.
• Instant Checkout: “order 23” skips straight to payment; “order items in the cart” checks out what you’ve added.
• Web Search (separate page): “healthy thai open now” → nearby places with rating, price, distance, Open in Maps, and menu links.
• Clean UX: Minimal chat, crisp cards, checkout modal (add → pay → redirect), and clear sign-in prompts when needed.
How we built it
Stack & Architecture
• Django + Django REST Framework for the APIs: /api/agent/ (suggestions), /api/cart/ (add/remove), /api/create-checkout-session/ (Stripe), /api/websearch/ (nearby places).
• TailwindCSS for the chat UI, menu cards (with visible IDs), and a three-step checkout modal.
• MySQL (in Docker) with Django ORM filtering and ranking.
• Browser Web Speech API for voice input, with a one-shot guard to prevent duplicate triggers.
• Stripe Checkout for secure payment workflows; django-allauth (Google) for quick sign-in.
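The API surface above can be sketched as a Django URLconf. This is an illustrative sketch only: the view names and module layout are assumptions, not the project's actual code.

```python
# urls.py -- sketch of the API surface described above; view names
# (agent_suggestions, cart, checkout, websearch) are assumed, not real.
from django.urls import path

from . import views

urlpatterns = [
    path("api/agent/", views.agent_suggestions),           # ranked menu suggestions
    path("api/cart/", views.cart),                         # add/remove items by ID
    path("api/create-checkout-session/", views.checkout),  # Stripe Checkout session
    path("api/websearch/", views.websearch),               # nearby-places discovery
]
```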
NLU (Hybrid)
• Rule-first for reliability and latency: budgets (“under $12”), cuisine/diet/feature tags, and ID commands (order/add/buy/get followed by `#?\d+`).
• LLM assist: Gemini converts “healthy thai open now, $$, 2 km” into structured filters that feed Tavily.
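One way to keep the LLM side deterministic is to force the model's reply into a small JSON contract and validate it before it reaches the search call. A minimal sketch, where the field names are assumptions rather than the project's actual schema:

```python
import json

# Fields we accept from the model; anything else is silently dropped.
# These field names are illustrative, not the project's real schema.
ALLOWED_FIELDS = {"cuisine", "diet", "price_level", "max_distance_km", "open_now"}

def parse_filters(llm_reply: str) -> dict:
    """Validate an LLM reply such as
    '{"cuisine": "thai", "diet": "healthy", "open_now": true}'
    into a filter dict, falling back to {} on any malformed input."""
    try:
        data = json.loads(llm_reply)
    except json.JSONDecodeError:
        return {}
    if not isinstance(data, dict):
        return {}
    return {k: v for k, v in data.items() if k in ALLOWED_FIELDS}
```

Because malformed or off-schema replies degrade to an empty filter set rather than an exception, the search pipeline stays up even when the model misbehaves.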
Ranking (Menu Suggestions)
s_i = \alpha\,\mathrm{popularity}_i + \beta\,\mathrm{sim}\big(\mathrm{tags}_i,\ \mathrm{prefs}\big) - \gamma\,\mathrm{allergen\_penalty}_i
We maximize s_i for fast, relevant, and safe top-N results.
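The score can be computed directly. A minimal sketch, assuming tag similarity is Jaccard overlap and the weights are hand-tuned (both assumptions; the write-up does not specify them):

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between an item's tags and the user's preferences."""
    return len(a & b) / len(a | b) if a | b else 0.0

def score(item: dict, prefs: set, allergens: set,
          alpha: float = 1.0, beta: float = 2.0, gamma: float = 5.0) -> float:
    """s_i = alpha*popularity_i + beta*sim(tags_i, prefs) - gamma*allergen_penalty_i."""
    penalty = 1.0 if item["tags"] & allergens else 0.0
    return alpha * item["popularity"] + beta * jaccard(item["tags"], prefs) - gamma * penalty

def top_n(items, prefs, allergens, n=3):
    """Rank by maximizing the score and keep the top N."""
    return sorted(items, key=lambda it: score(it, prefs, allergens), reverse=True)[:n]
```

A large gamma makes the allergen penalty dominate, so flagged items sink below safe ones regardless of popularity.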
Key UX Details
• Item IDs shown everywhere (cards + mini cart) to make voice adding unambiguous.
• Mic button inside the text box for quick, familiar access.
• Web Search toggle near the “Dining Assistant” header to jump to the separate discovery page.
Challenges we ran into
• Model/version pitfalls: early 404 using unsupported Gemini method; fixed by switching to a supported model + method pair.
• Voice duplication: some browsers fired onresult multiple times; solved with a one-shot guard.
• Regex edge cases: disambiguating “under 12” (budget) from “add 12” (ID) required verb-gated number parsing.
• Django wiring: TemplateDoesNotExist errors, missing imports, and URLconf mismatches; standardizing urls.py and the views resolved them.
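The verb-gated parse can be sketched with two anchored patterns: a number counts as a budget only after “under”/“below”, and counts as an item ID only after an action verb. A sketch with an assumed verb list, not the project's exact grammar:

```python
import re

# A number is an item ID only when gated by an action verb;
# "under 12" is a budget, never an ID.
ID_CMD = re.compile(r"\b(order|add|buy|get)\b([^.?!]*)", re.IGNORECASE)
BUDGET = re.compile(r"\b(?:under|below)\s*\$?(\d+(?:\.\d{1,2})?)", re.IGNORECASE)

def parse(utterance: str) -> dict:
    """Return {'ids': [...], 'budget': float | None} for a chat/voice command."""
    budget = None
    m = BUDGET.search(utterance)
    if m:
        budget = float(m.group(1))
        # Remove the budget phrase so its number is never mistaken for an ID.
        utterance = utterance[:m.start()] + utterance[m.end():]
    ids = []
    v = ID_CMD.search(utterance)
    if v:
        # Numbers after the verb are IDs, e.g. "add 23 and 45" -> [23, 45].
        ids = [int(n) for n in re.findall(r"#?(\d+)", v.group(2))]
    return {"ids": ids, "budget": budget}
```

Stripping the budget phrase before the ID pass is what keeps mixed commands like “add something under 12” from treating the budget number as an item ID.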
Accomplishments that we’re proud of
• Frictionless voice ordering by ID, including multi-ID commands.
• Clarity by design: visible IDs, consistent cards, and transparent checkout with clear error/login states.
• Hybrid NLU: rule engine for speed and determinism + LLM intent help where ambiguity actually exists (web search).
What we learned
• Concrete UX beats over-modeling: showing IDs and tightening copy reduced confusion more than fancy intent heuristics.
• Contracts over prompts: strict JSON schemas from the LLM kept the web search pipeline robust.
• Resilience builds trust: location fallbacks, timeouts, and explicit messages turn flaky situations into recoverable ones.
What’s next for OSU Dining Agent
• Opt-in personalization: re-rank by tastes/diets, learn favorites.
• Nutrition & allergens: surface macros and enforce hard excludes.
• Menu ingestion (RAG): parse PDFs/images into structured items.
• Maps & booking: inline map previews, table reservations.
• Mobile PWA: “reorder last”, push notifications (“order ready”).
• Order tracking: follow an order's status after checkout.