ShadeRoom is an advanced web application that allows users to edit and manipulate room images using AI-powered segmentation. The application enables users to select specific areas of a room image and apply different colors or textures to them, making it easy to visualize interior design changes without physically altering the space.
- AI-Powered Image Segmentation: Utilizes the Segment Anything Model (SAM) to intelligently identify and select areas in room images
- Multiple Selection Tools:
  - Lasso tool for freeform selection
  - Polygon/Pen tool for precise manual selection
  - Hover mode for quick previews
- Color and Texture Application: Apply various colors and textures to selected areas
- Responsive Design: Works on both desktop and mobile devices
- Real-time Preview: See changes instantly as you edit
- Multi-step Workflow: Guided journey through the editing process
- Image Processing: Handles image compression and manipulation
ShadeRoom is built using a modern React architecture with the following key components:
- React: UI library for building the user interface
- Vite: Build tool for fast development and optimized production builds
- React Router: For navigation between pages
- React Konva: Canvas manipulation for image editing
- Tailwind CSS: For styling and responsive design
- Context API: Multiple context providers for different aspects of the application
Project structure:

- `frontend/` – React + Vite frontend (UI, canvas editing, in-browser ONNX models)
- `backend/` – FastAPI service for generating image embeddings and providing protected endpoints
- `public/assets/` – static assets and pre-bundled ONNX decoder files
- AI segmentation with SAM (Segment Anything Model) loaded in the browser
- Multiple selection modes: lasso, pen/polygon, hover
- Apply colors and textures with blend modes and perspective transforms
- Multi-step guided workflow with `StepperProvider`
- Export/share the final image from the UI (download, copy link, native share)
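As a minimal sketch of the color-application idea above: a multiply blend darkens each base pixel in proportion to the paint color, which is why painted walls keep their shading. The function name and `alpha` parameter are illustrative only; in the app itself, blending happens on the canvas via React Konva.

```python
def multiply_blend(base, paint, alpha=1.0):
    """Blend one RGB pixel: multiply `base` by `paint`, then mix by `alpha`.

    `base` and `paint` are (r, g, b) tuples with 0-255 channels.
    """
    return tuple(
        round((1 - alpha) * b + alpha * (b * p / 255))
        for b, p in zip(base, paint)
    )

# Pure white paint leaves the base unchanged; pure black darkens it fully.
print(multiply_blend((200, 120, 80), (255, 255, 255)))  # (200, 120, 80)
print(multiply_blend((200, 120, 80), (0, 0, 0)))        # (0, 0, 0)
```

Because the base pixel's shading survives the multiply, applying this per pixel over a selected region recolors a wall while preserving its lighting.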
Frontend setup:

1. Open a terminal and go to `frontend/`:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Create a `.env` file in `frontend/` (example):

   ```env
   VITE_APP_BACKEND_URI=http://localhost:8000
   VITE_APP_BACKEND_KEY=<your-api-key>
   VITE_APP_HOSTNAME=localhost
   ```

4. Start the dev server:

   ```bash
   npm run dev
   ```

5. Open the app at the port printed by Vite (usually `http://localhost:5173`).
Backend setup:

1. Open a terminal, go to `backend/`, and create and activate a Python environment (example using uv):

   ```bash
   cd backend
   ```

2. Install dependencies:

   ```bash
   uv sync
   ```

3. Configure `.env` in `backend/` (example):

   ```env
   MODEL_PATH=models/sam_vit_h_encoder.onnx
   API_TOKEN=<your-api-key>
   CORS_ORIGINS=["*"]
   ```

4. Run the server (development):

   ```bash
   fastapi dev main.py
   ```

Endpoints:

- `GET /health` — health check (rate-limited)
- `POST /get-embedding` — upload an image (`multipart/form-data` field `image`) to receive raw tensor bytes. Requires the `x-api-key` header.
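A hypothetical client sketch for `POST /get-embedding`, assembling the multipart body with the standard library only. The `image` form field and the `x-api-key` header come from the endpoint description above; the filename and helper name are illustrative, and a real client might simply use `requests` instead.

```python
import urllib.request
import uuid

def build_embedding_request(image_bytes, api_key,
                            url="http://localhost:8000/get-embedding"):
    """Build a multipart/form-data POST request for the /get-embedding endpoint."""
    boundary = uuid.uuid4().hex
    body = (
        (f"--{boundary}\r\n"
         'Content-Disposition: form-data; name="image"; filename="room.jpg"\r\n'
         "Content-Type: image/jpeg\r\n\r\n").encode()
        + image_bytes
        + f"\r\n--{boundary}--\r\n".encode()
    )
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "x-api-key": api_key,
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )

# Sending it (commented out; requires the backend to be running):
# req = build_embedding_request(open("room.jpg", "rb").read(), "<your-api-key>")
# with urllib.request.urlopen(req) as resp:
#     raw = resp.read()  # raw tensor bytes
```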
Notes:
- The backend preloads the ONNX encoder at startup to reduce inference latency.
- Ensure `MODEL_PATH` points to a compatible ONNX model in `backend/models/`.
- The frontend can generate a data URL of the final image and use the Web Share API to share a blob (native share) when available.
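As a sketch of consuming the raw tensor bytes mentioned above: the float32 encoding and the `(1, 256, 64, 64)` shape below are assumptions based on SAM's ViT-H image encoder, so verify both against the model actually configured in `MODEL_PATH`.

```python
import array

def decode_embedding(raw: bytes, shape=(1, 256, 64, 64)):
    """Decode raw float32 tensor bytes and sanity-check the element count."""
    expected = 1
    for dim in shape:
        expected *= dim
    values = array.array("f")  # platform float32
    values.frombytes(raw)
    if len(values) != expected:
        raise ValueError(f"expected {expected} float32 values, got {len(values)}")
    return values
```

The length check catches mismatches early, e.g. when `MODEL_PATH` points at an encoder with a different embedding size.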
Acknowledgements:

- Segment Anything Model (SAM)
- ONNX Runtime
- React Konva