iOS app that scans restaurant menus using OCR and spatial layout analysis, then displays food images for each dish.
Tech Stack: SwiftUI + FastAPI + Redis + Google Custom Search + Apple Vision Framework
- Native camera with zoom (.5x/1x) and flash controls
- Spatial layout analysis that understands menu structure (multi-column, centered, boxed layouts)
- Structured menu view with sections, items, descriptions, prices, and modifiers
- Scan history with rename and swipe-to-delete
- On-device OCR via Apple Vision Framework — no menu images sent to the backend
- Redis caching for fast image lookup
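The on-device OCR step can be sketched with Apple's Vision framework. This is a minimal illustration, not the app's actual code; the function name and callback shape are assumptions, but `VNRecognizeTextRequest` and its options are the real Vision API.

```swift
import Vision
import UIKit

// Minimal sketch of on-device OCR with the Vision framework.
// Names here are illustrative, not Menui's actual implementation.
func recognizeMenuText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Each observation also carries a normalized bounding box
        // (observation.boundingBox), which spatial layout analysis can use
        // to group lines into columns, sections, and boxed regions.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Because recognition runs entirely in-process, the captured photo never has to leave the device.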
- macOS with Xcode 16.3 or later (the iOS 18.4 SDK requires it)
- iOS 18.4+ device or simulator
- Apple Developer account (for physical device testing)
```sh
git clone https://github.com/yourusername/menui.git
open menui/Menui.xcodeproj
```

- Select the Menui target in Xcode
- Go to Signing & Capabilities and choose your development team
- Run on a simulator or device (Cmd+R)
The app is pre-configured to use the production backend — no additional setup required.
Local backend: to run the backend locally, see `backend/README.md` (if present), or set `GOOGLE_API_KEY`, `GOOGLE_SEARCH_ENGINE_ID`, and `REDIS_URL` in `backend/.env` and run `uvicorn app.main:app --reload`. Then switch the base URL in `Menui/Services/APIService.swift` to `http://localhost:8000`.
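A `backend/.env` for local development might look like the following. The variable names come from the setup instructions above; the values are placeholders you replace with your own credentials.

```env
# Placeholder values — substitute your own Google and Redis settings.
GOOGLE_API_KEY=your-google-api-key
GOOGLE_SEARCH_ENGINE_ID=your-search-engine-id
REDIS_URL=redis://localhost:6379/0
```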
- All OCR processing happens on-device
- No menu images are sent to the backend (only dish names for image lookup)
- No tracking, analytics, or user accounts
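The client-side lookup can be sketched as below: only the dish name is sent, never the menu photo. The endpoint path (`images`), query parameter, and response shape are assumptions for illustration; the real contract lives in `Menui/Services/APIService.swift`.

```swift
import Foundation

// Hypothetical response shape — not the backend's actual schema.
struct ImageResult: Decodable {
    let imageURL: String
}

// Sends only a dish name to the backend; the scanned image stays on-device.
func fetchDishImage(named dish: String, baseURL: URL) async throws -> ImageResult {
    var components = URLComponents(
        url: baseURL.appendingPathComponent("images"),
        resolvingAgainstBaseURL: false
    )!
    components.queryItems = [URLQueryItem(name: "query", value: dish)]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return try JSONDecoder().decode(ImageResult.self, from: data)
}
```

On the backend, Redis caches results keyed by dish name, so repeated lookups for popular dishes skip the Google Custom Search call.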
MIT License — Copyright (c) 2026 Menui
