A mobile-first web app that identifies flowers using machine learning. Take a photo of a flower and get instant identification with confidence scores.
- **Camera Integration**: Uses the device camera with back-camera preference on mobile
- **Real-time ML Inference**: Runs entirely in the browser using TensorFlow.js
- **Offline Capable**: PWA-ready with no server-side processing required
- **5 Flower Types**: Identifies daisy, dandelion, rose, sunflower, and tulip
| Layer | Technology |
|---|---|
| Framework | Next.js 16 (App Router) |
| Language | TypeScript |
| Styling | Tailwind CSS 4 |
| ML Runtime | TensorFlow.js |
| Model | MobileNetV2 (transfer learning) |
| Testing | Jest + React Testing Library |
| Hooks | Husky (pre-commit CI) |
```
┌─────────────────────────────────────────────────────┐
│                      Browser                        │
├─────────────────────────────────────────────────────┤
│  Camera API  →  Canvas Crop   →  TensorFlow.js      │
│      ↓              ↓                 ↓             │
│  Video Stream   Square Image     MobileNetV2        │
│                 (282×282px)      Predictions        │
└─────────────────────────────────────────────────────┘
```
The model runs entirely client-side:

1. The camera captures a live video stream
2. On capture, the frame is cropped to the viewfinder circle
3. TensorFlow.js preprocesses the image and runs inference
4. Results display with confidence percentages
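The crop step above is pure geometry: find the largest centered square in the video frame and copy it onto the canvas. A minimal sketch (the function name is illustrative; the app's actual crop logic lives in `useCamera.ts`/`CameraView.tsx` and may differ):

```typescript
// Compute the source rectangle for a centered square crop of a video
// frame, suitable for the source arguments of canvas drawImage().
// Hypothetical helper, not the app's actual implementation.
export function computeSquareCrop(width: number, height: number) {
  const size = Math.min(width, height); // largest square that fits
  return {
    sx: Math.floor((width - size) / 2), // left edge of the crop
    sy: Math.floor((height - size) / 2), // top edge of the crop
    size,
  };
}

// Usage with a canvas context (282×282 matches the viewfinder):
//   const { sx, sy, size } = computeSquareCrop(video.videoWidth, video.videoHeight);
//   ctx.drawImage(video, sx, sy, size, size, 0, 0, 282, 282);
```

For a 640×480 landscape frame this yields a 480×480 square starting at x = 80; portrait frames crop vertically instead.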
```bash
# Install dependencies
pnpm install

# Run development server
pnpm dev

# Open http://localhost:3000
```

| Command | Description |
|---|---|
| `pnpm dev` | Start development server |
| `pnpm build` | Production build |
| `pnpm test` | Run Jest tests |
| `pnpm ci` | Type check + lint + test |
The model is pre-trained and included in `public/tfjs_model/`. To retrain:

```bash
cd training

# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run training (~5 minutes on CPU)
python train.py

# Copy output to public directory
cp tfjs_model/* ../public/tfjs_model/
```

Training uses the TensorFlow Flowers dataset (3,670 images across 5 classes) with MobileNetV2 transfer learning.
```
├── public/
│   ├── tfjs_model/           # TensorFlow.js model files
│   └── manifest.json         # PWA manifest
├── src/
│   ├── app/
│   │   ├── page.tsx          # Home page
│   │   └── camera/page.tsx   # Camera + identification
│   ├── components/
│   │   ├── CameraView.tsx    # Viewfinder with circle guide
│   │   ├── ResultCard.tsx    # Prediction results display
│   │   └── ...
│   ├── hooks/
│   │   └── useCamera.ts      # Camera access + capture
│   └── lib/
│       ├── model.ts          # TensorFlow.js inference
│       └── flowers.ts        # Flower metadata
└── training/
    ├── train.py              # Model training script
    └── requirements.txt      # Python dependencies
```
- Base Model: MobileNetV2 (ImageNet pretrained)
- Input Size: 224×224×3
- Output: 5-class softmax (daisy, dandelion, roses, sunflowers, tulips)
- Size: ~8.7MB (TFJS graph model)
- Accuracy: ~87% on test set
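Because the final layer is a softmax, the model's output is already a probability distribution; turning it into the confidence percentages shown in the UI is just scaling and sorting. A minimal sketch, assuming the class order matches the dataset's alphabetical directory names (the function and interface names are illustrative, not the app's actual `model.ts` API):

```typescript
// Class order assumed to match the TensorFlow Flowers training directories.
const FLOWER_CLASSES = ["daisy", "dandelion", "roses", "sunflowers", "tulips"];

interface Prediction {
  label: string;
  confidence: number; // percentage, 0-100
}

// Map a 5-element softmax output to labeled percentages, highest first.
export function toPredictions(scores: ArrayLike<number>): Prediction[] {
  return FLOWER_CLASSES.map((label, i) => ({
    label,
    confidence: scores[i] * 100,
  })).sort((a, b) => b.confidence - a.confidence);
}
```

The top entry is the displayed identification; the remaining entries can back an "alternatives" view in `ResultCard.tsx`.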
MIT