A flexible, provider-agnostic storage system for EventHorizon that supports multiple storage backends, including the local filesystem, S3-compatible storage (AWS S3, MinIO, DigitalOcean Spaces, Cloudflare R2), and more.
No configuration needed! By default, EventHorizon uses local filesystem storage:
```bash
# In your .env (or leave STORAGE_BACKEND unset)
STORAGE_BACKEND=local
```

Files are stored in the `media/` directory.
MinIO provides S3-compatible storage that runs locally:
```bash
# 1. Start MinIO with Docker
docker run -d \
  --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  quay.io/minio/minio server /data --console-address ":9001"

# 2. Configure EventHorizon (.env)
STORAGE_BACKEND=minio
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_STORAGE_BUCKET_NAME=eventhorizon
AWS_S3_ENDPOINT_URL=http://localhost:9000
AWS_S3_USE_SSL=False
AWS_S3_REGION_NAME=us-east-1

# 3. Create bucket and set permissions
# Visit http://localhost:9001 and create an 'eventhorizon' bucket
```

See `docs/storage/minio.md` for detailed setup.
```bash
# In your .env
STORAGE_BACKEND=s3
AWS_ACCESS_KEY_ID=your-aws-access-key
AWS_SECRET_ACCESS_KEY=your-aws-secret-key
AWS_STORAGE_BUCKET_NAME=your-bucket-name
AWS_S3_REGION_NAME=us-east-1
AWS_S3_USE_SSL=True
```

- Provider Agnostic: Switch between providers with a single environment variable
- S3-Compatible: Works with AWS S3, MinIO, DigitalOcean Spaces, Cloudflare R2, Backblaze B2
- Django Integration: Seamless integration with Django's file storage system
- Multiple Storage Classes: Public media, private files, static files
- CDN Support: Optional CDN integration for faster file serving
- Easy Migration: Migrate between providers without code changes
| Backend | Status | Use Case |
|---|---|---|
| Local Filesystem | ✅ Ready | Development |
| AWS S3 | ✅ Ready | Production |
| MinIO | ✅ Ready | Local development, Self-hosted |
| DigitalOcean Spaces | ✅ Ready | Production (cost-effective) |
| Cloudflare R2 | ✅ Ready | Production (no egress fees) |
| Supabase Storage | 🔜 Coming Soon | PostgreSQL-backed storage |
| Vercel Blob | 🔜 Coming Soon | Edge-optimized storage |
```
storage/
├── __init__.py   # Package initialization
├── base.py       # Abstract base storage class
├── s3.py         # S3-compatible storage backends
├── factory.py    # Factory for selecting storage backend
└── utils.py      # Helper utilities
```
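Internally, `factory.py` can stay as simple as a lookup from `STORAGE_BACKEND` to a backend class. A minimal, framework-free sketch of that idea — the function name, mapping, and dotted paths here are illustrative assumptions, not the actual implementation:

```python
import os

# Hypothetical mapping from STORAGE_BACKEND values to backend classes.
# MinIO is S3-compatible, so it can reuse the S3 backend with a custom endpoint.
_BACKENDS = {
    "local": "django.core.files.storage.FileSystemStorage",
    "s3": "storage.s3.S3MediaStorage",
    "minio": "storage.s3.S3MediaStorage",
}


def get_storage_backend_path(name=None):
    """Return the dotted path of the configured storage backend."""
    name = name or os.environ.get("STORAGE_BACKEND", "local")
    try:
        return _BACKENDS[name]
    except KeyError:
        raise ValueError(f"Unknown STORAGE_BACKEND: {name!r}")
```

Because every caller goes through the factory, switching providers is a matter of changing one environment variable rather than touching model code.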
Public storage for user-uploaded files (avatars, event images, etc.):
```python
from django.db import models

from storage.s3 import S3MediaStorage


class Profile(models.Model):
    avatar = models.ImageField(storage=S3MediaStorage())
```

Features:
- Public read access
- Unique filenames (no overwrites)
- CDN support
Private storage for sensitive files:
```python
from django.db import models

from storage.s3 import S3PrivateStorage


class Invoice(models.Model):
    file = models.FileField(storage=S3PrivateStorage())
```

Features:
- Private access only
- Pre-signed URLs (1 hour expiry)
- No CDN
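For intuition on how those pre-signed URLs work: the URL carries an expiry timestamp plus an HMAC signature over it, so the storage service can verify the link without any session. A simplified, stdlib-only sketch of the legacy SigV2-style scheme (real backends delegate signing to boto3 and SigV4; this is purely illustrative):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote


def presign_url(bucket, key, secret_key, access_key, expires_in=3600):
    """Illustrative pre-signed GET URL: expiry + HMAC over the request.

    Anyone holding the URL can fetch the object until `Expires`;
    tampering with the path or expiry invalidates the signature.
    """
    expires = int(time.time()) + expires_in
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    ).decode()
    return (
        f"https://{bucket}.s3.amazonaws.com/{quote(key)}"
        f"?AWSAccessKeyId={access_key}&Expires={expires}&Signature={quote(sig, safe='')}"
    )
```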
For static files (CSS, JS):
```python
# In settings.py
STATICFILES_STORAGE = "storage.s3.S3StaticStorage"
```

All configuration is done via environment variables in `.env`:
```bash
# Storage Backend Selection
STORAGE_BACKEND=s3  # Options: local, s3, minio

# S3 Configuration
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_STORAGE_BUCKET_NAME=your-bucket-name
AWS_S3_REGION_NAME=us-east-1

# Optional: Custom endpoint (for MinIO, DigitalOcean Spaces, etc.)
AWS_S3_ENDPOINT_URL=http://localhost:9000

# Optional: SSL (disable for local MinIO)
AWS_S3_USE_SSL=True

# Optional: CDN domain
AWS_S3_CUSTOM_DOMAIN=cdn.example.com

# Optional: Use S3 for static files
USE_S3_FOR_STATIC=False
```

See `.env.example` for all available options.
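One subtlety with env-based configuration is that every value arrives as a string, so booleans like `AWS_S3_USE_SSL=False` need explicit parsing. A sketch of how `settings.py` might consume these variables — the `env_bool` helper and the defaults shown are assumptions, not the project's actual code:

```python
import os


def env_bool(name, default=False):
    """Parse a boolean env var; 'True', 'true', '1', 'yes' all count as true."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")


# Hypothetical settings.py usage (defaults assumed):
STORAGE_BACKEND = os.environ.get("STORAGE_BACKEND", "local")
AWS_S3_USE_SSL = env_bool("AWS_S3_USE_SSL", default=True)
USE_S3_FOR_STATIC = env_bool("USE_S3_FOR_STATIC", default=False)
```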
By default, models use the configured storage backend:
```python
class Profile(models.Model):
    # Uses DEFAULT_FILE_STORAGE from settings
    avatar = models.ImageField(upload_to="avatars/")
```

Specify a storage backend per field:
```python
from django.db import models

from storage.s3 import S3MediaStorage, S3PrivateStorage


class Event(models.Model):
    # Public files
    banner = models.ImageField(storage=S3MediaStorage())

    # Private files
    attendee_list = models.FileField(storage=S3PrivateStorage())
```

Get a storage backend programmatically:
```python
from storage.factory import get_media_storage, get_private_storage

# Get configured media storage
media_storage = get_media_storage()

# Get private storage
private_storage = get_private_storage()
```

Use utility functions for organized file paths:
```python
from django.db import models

from storage.utils import generate_avatar_path


class Profile(models.Model):
    avatar = models.ImageField(upload_to=generate_avatar_path)
```

- Storage Overview - Architecture and design
- MinIO Setup - Local development setup
- Migration Guide - Switch between providers
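A hypothetical implementation of an upload-path helper like `generate_avatar_path`: Django calls `upload_to` callables with `(instance, filename)`, so the helper can scope files per user and randomize names to avoid overwrites. The exact path layout below is an assumption, not the project's actual code:

```python
import os
import uuid


def generate_avatar_path(instance, filename):
    """Build a unique, user-scoped upload path like avatars/<pk>/<uuid>.png."""
    ext = os.path.splitext(filename)[1].lower()
    owner = instance.pk if instance.pk else "new"
    return f"avatars/{owner}/{uuid.uuid4().hex}{ext}"
```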
- Check credentials in `.env`
- Verify the bucket exists
- Check bucket permissions (public-read for media)
- Verify network connectivity
- Check the `AWS_S3_CUSTOM_DOMAIN` setting
- Verify the bucket has public read permissions
- Check CORS configuration (for web uploads)
- Verify IAM permissions
- Check the bucket policy
- Ensure credentials are correct
```bash
# Run Django checks
python manage.py check

# Test storage import
python -c "from storage import get_storage_backend; print('✓ OK')"

# Test file upload (through Django admin or web interface)
python manage.py runserver
# Go to http://localhost:8000/profile and upload an avatar
```

- Never commit credentials - Use the `.env` file (gitignored)
- Use IAM roles - In AWS, prefer IAM roles over access keys
- Restrict permissions - Only grant the S3 permissions you need
- Enable versioning - Protect against accidental deletions
- Use HTTPS - Always use SSL in production (`AWS_S3_USE_SSL=True`)
- Rotate keys - Regularly rotate access keys
The storage system is designed to make adding new providers straightforward:
- Supabase Storage - Coming soon
- Vercel Blob - Coming soon
- Google Cloud Storage - Planned
- Azure Blob Storage - Planned
To add a new storage provider:

- Create a new file `storage/{provider}.py`
- Implement the storage backend class
- Update `factory.py` to include the new provider
- Add documentation in `docs/storage/{provider}.md`
- Update this README
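The backend class from the second step would mirror Django's `Storage` API (`_open`, `_save`, `url`, `exists`). A hypothetical skeleton — the class name and method bodies are placeholders, not a working backend:

```python
class SupabaseStorage:
    """Hypothetical skeleton for storage/supabase.py.

    A real implementation would subclass the abstract base in storage/base.py
    (which follows django.core.files.storage.Storage) and fill in each method.
    """

    def _open(self, name, mode="rb"):
        """Return a File-like object for `name`."""
        raise NotImplementedError

    def _save(self, name, content):
        """Persist `content` under `name`; return the stored name."""
        raise NotImplementedError

    def url(self, name):
        """Return a public (or pre-signed) URL for `name`."""
        raise NotImplementedError

    def exists(self, name):
        """Return True if `name` is already stored."""
        raise NotImplementedError
```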
For issues or questions:
- Check the documentation
- Review technical quirks
- Open an issue on GitHub