Not affiliated with the Homebox project. This is an unofficial third-party companion app.
AI-powered companion for Homebox inventory management.
Take a photo of your stuff, and let AI identify and catalog items directly into your Homebox instance. Perfect for quickly inventorying a room, shelf, or collection. Use the AI Chat to manage your inventory, find locations, or update details just by asking.
```mermaid
flowchart LR
    A[🔐 Login<br/>Homebox] --> B[📍 Select<br/>Location]
    B --> C[📸 Capture<br/>Photos]
    C --> D[✏️ Review &<br/>Edit Items]
    D --> E[✅ Submit to<br/>Homebox]
    B -.-> B1[/Browse, search,<br/>or scan QR/]
    C -.-> C1[/AI analyzes with<br/>OpenAI GPT-5/]
    D -.-> D1[/Edit names,<br/>quantities, tags/]
```
- Login — Authenticate with your existing Homebox credentials
- Select Location — Browse the location tree, search, or scan a Homebox QR code
- Capture Photos — Take or upload photos of items (supports multiple photos per item)
- AI Detection — AI vision (via LiteLLM*) identifies items, quantities, and metadata
- Review & Edit — Adjust AI suggestions or ask the AI to correct mistakes
- Submit — Items are created in your Homebox inventory with photos attached
*LiteLLM is a Python adapter library used to call OpenAI directly. No local AI model is required (unless you want one); you just need your API key.
GPT-5 mini (default) offers the best accuracy. GPT-5 nano is 3x cheaper but may need more corrections. Typical cost: ~$0.30 per 100 items (mini) or ~$0.10 per 100 items (nano).
Prices as of 2025-12-10, using OpenAIβs published pricing for GPT-5 mini and GPT-5 nano.
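As a back-of-the-envelope check, the estimates above work out as follows (the per-100-item figures are the approximate quotes from this section, not measured values):

```python
# Rough cost estimator based on the approximate per-100-item figures above.
COST_PER_100_ITEMS = {"gpt-5-mini": 0.30, "gpt-5-nano": 0.10}  # USD, approximate

def estimate_cost(model: str, n_items: int) -> float:
    """Estimated USD cost of cataloging n_items with the given model."""
    return COST_PER_100_ITEMS[model] / 100 * n_items

print(f"${estimate_cost('gpt-5-mini', 250):.2f}")  # 250 items with GPT-5 mini -> $0.75
```

Actual cost depends on photo count per item, image detail, and how many correction rounds you run.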
Before you start, you'll need:
- An OpenAI API key — Get one at platform.openai.com
- A Homebox instance — Your own Homebox server, or use the demo server to test
Compatibility: Tested with Homebox v0.21+. Earlier versions may have different authentication behavior.
Want to try it out without setting up Homebox? Use the public demo server:
```bash
docker run -p 8000:8000 \
  -e HBC_LLM_API_KEY=sk-your-key \
  -e HBC_HOMEBOX_URL=https://demo.homebox.software \
  ghcr.io/duelion/homebox-companion:latest
```

Open http://localhost:8000 and log in with [email protected] / demo.
```yaml
# docker-compose.yml
services:
  homebox-companion:
    image: ghcr.io/duelion/homebox-companion:latest
    container_name: homebox-companion
    restart: always
    environment:
      - HBC_LLM_API_KEY=sk-your-api-key-here
      - HBC_HOMEBOX_URL=http://your-homebox-ip:7745
    ports:
      - 8000:8000
```

```bash
docker compose up -d
```

Open http://localhost:8000 in your browser.
Tip: If Homebox runs on the same machine but outside Docker, use `http://host.docker.internal:PORT` as the URL.
ARM64/Raspberry Pi: Docker images are built for both `linux/amd64` and `linux/arm64` architectures.
- Identifies multiple items in a single photo
- Extracts manufacturer, model, serial number, price when visible
- Suggests tags from your existing Homebox tags
- Multi-language support
- Multi-image analysis — Take photos from multiple angles for better accuracy
- Single-item mode — Force the AI to treat a photo as one item (for sets/kits)
- AI corrections — Tell the AI what it got wrong and it re-analyzes
- Custom thumbnails — Crop and select the best image for each item
- Browse hierarchical location tree
- Search locations by name
- Scan Homebox QR codes
- Create new locations on the fly
- Configure how AI formats each field (name style, description format, etc.)
- Set a default tag for all detected items
- Define custom Homebox fields with AI instructions (the AI populates them during detection)
- Natural language queries — Ask questions like "How many items do I have?" or "List my tags"
- Inventory actions — Create, update, move, or delete items through conversation
- Approval workflow — Review and approve AI-proposed changes before they're applied
- Streaming responses — Real-time AI responses with tool execution feedback
Available Tools
The chat assistant has access to 24 tools for interacting with your Homebox inventory:
Read-Only (auto-execute):
| Tool | Description |
|---|---|
| `list_locations` | List all locations |
| `get_location` | Get location details with children |
| `list_tags` | List all tags |
| `list_items` | List items with filtering/pagination |
| `search_items` | Search items by text query |
| `get_item` | Get full item details |
| `get_item_by_asset_id` | Look up item by asset ID |
| `get_item_path` | Get item's full location path |
| `get_location_tree` | Get hierarchical location tree |
| `get_statistics` | Get inventory statistics |
| `get_statistics_by_location` | Item counts by location |
| `get_statistics_by_tag` | Item counts by tag |
| `get_attachment` | Get attachment content |
Write (requires approval):
| Tool | Description |
|---|---|
| `create_item` | Create a new item |
| `update_item` | Update item fields |
| `create_location` | Create a new location |
| `update_location` | Update location details |
| `create_tag` | Create a new tag |
| `update_tag` | Update tag details |
| `upload_attachment` | Upload attachment to item |
| `ensure_asset_ids` | Assign asset IDs to all items |
Destructive (requires approval):
| Tool | Description |
|---|---|
| `delete_item` | Delete an item |
| `delete_location` | Delete a location |
| `delete_tag` | Delete a tag |
Homebox Companion uses LiteLLM as a Python library to call AI providers. You don't need to self-host anything — just get an OpenAI API key from platform.openai.com and you're ready to go. We officially support and test with OpenAI GPT models only.
Fallback Support: You can configure a secondary LLM profile in Settings that automatically activates if your primary provider fails.
Officially Supported Models
- GPT-5 mini (default) — Recommended for best balance of speed and accuracy
- GPT-5 nano
Using Other Providers (Experimental)
You can try other LiteLLM-compatible providers at your own risk. The app checks if your chosen model supports the required capabilities using LiteLLM's API:
Required capabilities (photo scanning):

- Vision — Checked via `litellm.supports_vision(model)`
- Structured outputs — Checked via `litellm.supports_response_schema(model)`

Required for the Chat assistant (in addition to the above):

- Function calling — Checked via `litellm.supports_function_calling(model)`. Models without native tool calling (e.g., `llava`, `moondream`) will work for photo scanning but not for the Chat assistant, which relies on tool calls to query your inventory.
Finding model names:
Model names are passed directly to LiteLLM. Use the exact names from LiteLLM's documentation:
Common examples:

- OpenAI: `gpt-4o`, `gpt-4o-mini`, `gpt-5-mini`
- Anthropic: `claude-sonnet-4-5`, `claude-3-5-sonnet-20241022`
Note: Model names must exactly match LiteLLM's expected format. Typos or incorrect formats will cause errors. Check LiteLLM's provider documentation for the correct model names.
Running Local Models:
You can run models locally using tools like Ollama, LM Studio, or vLLM. See LiteLLM's Local Server documentation for setup instructions.
Once your local server is running, configure the app:
```bash
HBC_LLM_API_KEY=any-value-works-for-local   # Just needs to be non-empty
HBC_LLM_API_BASE=http://localhost:11434     # Your local server URL
HBC_LLM_MODEL=ollama/llava:34b              # Your local model name
HBC_LLM_ALLOW_UNSAFE_MODELS=true            # Required for most local models
```

Note: Local models must support vision for photo scanning (e.g., `llava`, `bakllava`, `moondream`). For the Chat assistant, the model must also support function calling; most vision-only models do not. Check your model's capabilities with `litellm.supports_function_calling("ollama/your-model")`. Performance and accuracy vary widely.
📖 Full reference: See `.env.example` for all available environment variables with detailed explanations and examples.
For a quick setup, you only need to provide your OpenAI API key. All other settings have sensible defaults.
| Variable | Required | Description |
|---|---|---|
| `HBC_LLM_API_KEY` | Yes | Your OpenAI API key |
| `HBC_HOMEBOX_URL` | No | Your Homebox instance URL (defaults to demo server) |
| `HBC_LINK_BASE_URL` | No | Public URL for Homebox links in chat (defaults to `HBC_HOMEBOX_URL`) |
⚙️ Full Configuration Reference
| Variable | Default | Description |
|---|---|---|
| `HBC_LLM_MODEL` | `gpt-5-mini` | Model to use. Supported: `gpt-5-mini`, `gpt-5-nano`. |
| `HBC_LLM_API_BASE` | — | Custom API base URL (for proxies or experimental providers) |
| `HBC_LLM_ALLOW_UNSAFE_MODELS` | `false` | Skip capability validation for unrecognized models |
| `HBC_LLM_TIMEOUT` | `120` | LLM request timeout in seconds |
| `HBC_LLM_STREAM_TIMEOUT` | `300` | Streaming timeout for large responses (e.g., hierarchical views) |
| `HBC_IMAGE_QUALITY` | `medium` | Image quality for Homebox uploads: `raw`, `high`, `medium`, `low` |
Image Quality
Control compression applied to images uploaded to Homebox. Compression happens server-side during AI analysis to avoid slowing down mobile devices.
| Quality Level | Max Dimension | JPEG Quality | File Size | Use Case |
|---|---|---|---|---|
| `raw` | No limit | Original | Largest | Full quality originals |
| `high` | 2560px | 85% | Large | Best quality, moderate size |
| `medium` | 1920px | 75% | Moderate | Default, balanced |
| `low` | 1280px | 60% | Smallest | Faster uploads, smaller storage |
Example:

```bash
HBC_IMAGE_QUALITY=high
```

Note: This setting only affects images uploaded to Homebox. AI analysis always uses optimized images regardless of this setting.
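For illustration, here is roughly what such a preset does, sketched with Pillow. This is not the app's actual implementation; only the preset values come from the table above:

```python
from io import BytesIO
from PIL import Image

# (max dimension in px, JPEG quality) per preset; "raw" would skip compression.
PRESETS = {"high": (2560, 85), "medium": (1920, 75), "low": (1280, 60)}

def compress(image: Image.Image, level: str = "medium") -> bytes:
    """Downscale to the preset's max dimension and re-encode as JPEG."""
    max_dim, quality = PRESETS[level]
    image = image.copy()
    image.thumbnail((max_dim, max_dim))  # shrinks only; aspect ratio preserved
    buf = BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

# A 3000x2000 photo becomes 1920x1280 under the "medium" preset;
# images already under the limit are left at their original size.
```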
Capture Limits
| Variable | Default | Description |
|---|---|---|
| `HBC_CAPTURE_MAX_IMAGES` | `30` | Maximum photos per capture session |
| `HBC_CAPTURE_MAX_FILE_SIZE_MB` | `10` | Maximum file size per image in MB |
Note: These are experimental settings. It's advisable to keep the default values to minimize data loss risk during capture sessions.
Rate Limiting
| Variable | Default | Description |
|---|---|---|
| `HBC_RATE_LIMIT_ENABLED` | `true` | Enable/disable API rate limiting |
| `HBC_RATE_LIMIT_RPM` | `400` | Requests per minute (80% of Tier 1 limit) |
| `HBC_RATE_LIMIT_TPM` | `400000` | Tokens per minute (80% of Tier 1 limit) |
| `HBC_RATE_LIMIT_BURST_MULTIPLIER` | `1.5` | Burst capacity multiplier |
Note: Default settings are conservative (80% of OpenAI Tier 1 limits). Only configure if you have a higher-tier account or need to adjust limits.
Examples for different OpenAI tiers:
- Tier 2: `HBC_RATE_LIMIT_RPM=4000`, `HBC_RATE_LIMIT_TPM=1600000`
- Tier 3: `HBC_RATE_LIMIT_RPM=4000`, `HBC_RATE_LIMIT_TPM=3200000`
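The pattern behind these numbers is simply 80% of your account tier's published limits. A small sketch of the derivation (the Tier 1 figures of 500 RPM / 500,000 TPM are inferred from the defaults above; check your account's actual limits):

```python
# Derive conservative rate limits as 80% of a tier's published limits.
SAFETY_FACTOR = 0.8

def conservative_limits(rpm_limit: int, tpm_limit: int) -> tuple[int, int]:
    """Return (RPM, TPM) values at 80% of the given tier limits."""
    return int(rpm_limit * SAFETY_FACTOR), int(tpm_limit * SAFETY_FACTOR)

print(conservative_limits(500, 500_000))  # inferred Tier 1 -> (400, 400000)
```

Leaving 20% headroom keeps other API consumers on the same key from tripping provider-side 429 errors.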
Server & Logging
| Variable | Default | Description |
|---|---|---|
| `HBC_SERVER_HOST` | `0.0.0.0` | Server bind address |
| `HBC_SERVER_PORT` | `8000` | Server port |
| `HBC_LOG_LEVEL` | `INFO` | Logging level |
| `HBC_DISABLE_UPDATE_CHECK` | `false` | Disable update notifications |
| `HBC_MAX_UPLOAD_SIZE_MB` | `20` | Maximum file upload size in MB |
| `HBC_CORS_ORIGINS` | `*` | Allowed CORS origins (comma-separated, or `*`) |
🔒 Security Considerations (Production)
When deploying to production, review these security settings:
| Variable | Default | Production Recommendation |
|---|---|---|
| `HBC_CORS_ORIGINS` | `*` | Set to specific origins (e.g., `https://your-domain.com`) |
| `HBC_AUTH_RATE_LIMIT_RPM` | `10` | Login attempts per minute per IP (brute-force protection) |
| `HBC_CHAT_RATE_LIMIT_RPM` | `20` | Chat messages per minute per IP (LLM cost protection) |
CORS Example:

```bash
# Allow only your frontend domain
HBC_CORS_ORIGINS=https://inventory.example.com

# Multiple origins (comma-separated)
HBC_CORS_ORIGINS=https://inventory.example.com,https://admin.example.com
```

Note: The default `HBC_CORS_ORIGINS=*` allows requests from any origin, which is convenient for development but should be restricted in production environments exposed to the internet.
🖨️ Label Printing
| Variable | Default | Description |
|---|---|---|
| `HBC_PRINT_ENABLED` | `false` | Show a "Print Label" button after items are created |
When enabled, a print button appears on the post-creation screen for each item. Pressing it triggers Homebox's built-in labelmaker, which generates and prints a label via the command configured on your Homebox server.
Homebox server prerequisite: Set the `HBOX_LABEL_MAKER_PRINT_COMMAND` environment variable on your Homebox instance (e.g., `lp -d MyPrinter %s`). Without it, print requests will fail. See the Homebox documentation for details.
AI Output Customization
Customize how AI formats detected item fields. Set via environment variables or the Settings page (UI takes priority).
| Variable | Description |
|---|---|
| `HBC_AI_OUTPUT_LANGUAGE` | Language for AI output (default: English) |
| `HBC_AI_DEFAULT_TAG_ID` | Tag ID to auto-apply to all items |
| `HBC_AI_NAME` | Custom instructions for item naming |
| `HBC_AI_DESCRIPTION` | Custom instructions for descriptions |
| `HBC_AI_QUANTITY` | Custom instructions for quantity counting |
| `HBC_AI_MANUFACTURER` | Instructions for manufacturer extraction |
| `HBC_AI_MODEL_NUMBER` | Instructions for model number extraction |
| `HBC_AI_SERIAL_NUMBER` | Instructions for serial number extraction |
| `HBC_AI_PURCHASE_PRICE` | Instructions for price extraction |
| `HBC_AI_PURCHASE_FROM` | Instructions for retailer extraction |
| `HBC_AI_NOTES` | Custom instructions for notes |
| `HBC_AI_NAMING_EXAMPLES` | Example names to guide the AI |
- Batch more items for faster uploads — Images are analyzed by the AI in parallel (up to 30 simultaneously), so batching many items is faster than adding them one at a time.
- Include receipts in your photos — The AI can extract purchase price, retailer, and date from receipt images.
- Multiple angles = better results — Include close-ups of labels, serial numbers, or barcodes for more accurate detection.
- HTTPS required for QR scanning — Native camera QR detection only works over HTTPS. On HTTP, a "Take Photo" fallback is available.
- Use the Settings page — Customize AI behavior, define custom fields, and manage LLM profiles without restarting.
- Long press to confirm all — On the review screen, long-press the confirm button to accept all remaining items at once.
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.



