Complete guide for setting up and working with the FilantropiaSolar development environment
For the fastest setup, use our automated development script:
```bash
# Run the automated setup script
./scripts/dev-setup.sh
```

This will:

- Check Python version compatibility
- Set up a virtual environment
- Install all dependencies
- Configure pre-commit hooks
- Run initial tests
- Create development scripts
If you prefer manual setup or need more control:
```bash
# 1. Create virtual environment
python3 -m venv venv
source venv/bin/activate

# 2. Install development dependencies
pip install -r requirements-dev.txt
pip install -e .

# 3. Set up pre-commit hooks
pre-commit install --install-hooks

# 4. Copy environment template
cp .env.template .env
# Edit .env with your configuration

# 5. Run tests to verify setup
pytest tests/ -v
```

- Python 3.11+ (3.14 recommended)
- Git for version control
- Docker & Docker Compose (optional, for containerized development)
- Make (optional, for using Makefile commands)
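To confirm the interpreter meets the minimum version before setting anything up, here is a small stdlib-only check (illustrative, not part of the repository):

```python
import sys


def meets_minimum(version: tuple[int, int], minimum: tuple[int, int] = (3, 11)) -> bool:
    """Return True if a (major, minor) version tuple satisfies the minimum."""
    return version >= minimum


if __name__ == "__main__":
    if not meets_minimum(sys.version_info[:2]):
        raise SystemExit(f"Python 3.11+ required, found {sys.version.split()[0]}")
    print("Python version OK")
```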
```bash
# macOS: install Python via Homebrew
brew install python@3.11

# Install Docker Desktop
brew install --cask docker

# Install additional tools
brew install git make
```

```bash
# Ubuntu/Debian
sudo apt update
sudo apt install python3.11 python3.11-venv python3.11-dev
sudo apt install git make docker.io docker-compose

# Enable Docker for the current user
sudo usermod -aG docker $USER
```

```text
FilantropiaSolar/
├── src/                        # Source code
│   └── filantropia_solar/      # Main package
├── tests/                      # Test suite
├── docs/                       # Documentation
├── scripts/                    # Development scripts
├── config/                     # Configuration files
├── .github/                    # GitHub workflows
├── pyproject.toml              # Project configuration
├── requirements.txt            # Production dependencies
├── requirements-dev.txt        # Development dependencies
├── Dockerfile                  # Container definition
├── docker-compose.yml          # Multi-service orchestration
├── Makefile                    # Development commands
└── .pre-commit-config.yaml     # Code quality hooks
```
```bash
# Activate virtual environment
source venv/bin/activate

# Deactivate when done
deactivate

# Recreate the virtual environment if needed
rm -rf venv
make venv
make install
```

```bash
# Install production dependencies only
make install-prod

# Install all development dependencies
make install

# Update dependencies
make update-deps

# Check for outdated packages
make deps-check
```

```bash
# Run all tests
make test

# Run tests with coverage
make test-cov

# Run specific test categories
make test-unit          # Unit tests only
make test-integration   # Integration tests only
make test-ml            # ML model tests
make test-performance   # Performance benchmarks
```

- Unit Tests: fast, isolated component tests
- Integration Tests: component interaction tests
- ML Tests: machine learning model validation
- Performance Tests: benchmarking and profiling
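If these categories are implemented as pytest markers (an assumption, though the `slow` and `ml` markers do appear in the test example below), subsets can also be selected with `pytest -m` directly. A minimal sketch:

```python
# demo_markers.py -- hypothetical module illustrating pytest marker selection
import pytest


@pytest.mark.slow
def test_model_training():
    assert True  # stands in for a long-running training test


def test_prediction_fast():
    assert True  # an unmarked, fast unit test


# Select subsets from the command line, e.g.:
#   pytest -m "not slow"   # skip long-running tests
#   pytest -m slow         # run only long-running tests
```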
```python
# Example test structure
import pytest

from filantropia_solar.prediction import EnergyPredictor


class TestEnergyPredictor:
    @pytest.fixture
    def predictor(self, mock_data_processor):
        # mock_data_processor is supplied by a shared conftest.py fixture
        return EnergyPredictor(mock_data_processor)

    def test_prediction_basic(self, predictor):
        result = predictor.predict(installation_id="test", date="2023-06-15")
        assert result.energy_kwh > 0

    @pytest.mark.slow
    def test_model_training(self, predictor):
        # Slow test that trains actual models
        pass

    @pytest.mark.ml
    def test_model_accuracy(self, predictor):
        # ML-specific test
        pass
```

```bash
# Format code
make format

# Run linting
make lint

# Type checking
make typecheck

# Security scanning
make security

# Run all quality checks
make quality
```

Pre-commit hooks automatically run on every commit:

```bash
# Install hooks (done by dev-setup.sh)
pre-commit install

# Run hooks manually
pre-commit run --all-files

# Update hooks
pre-commit autoupdate
```

- Line length: 88 characters (Black default)
- Import sorting: use the ruff/isort configuration
- Type hints: required for all public APIs
- Docstrings: Google style for all public functions
- Variable naming: `snake_case` for variables, `UPPER_CASE` for constants
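Putting those conventions together in one sketch (the names here are illustrative, not part of the real package):

```python
"""Illustrative module following the project's style conventions."""

DEFAULT_TIMEZONE = "Europe/Lisbon"  # UPPER_CASE constant


def daily_average(energy_kwh: list[float]) -> float:
    """Compute the mean daily energy production.

    Args:
        energy_kwh: Daily energy readings in kilowatt-hours.

    Returns:
        The arithmetic mean of the readings, or 0.0 for an empty list.
    """
    if not energy_kwh:
        return 0.0
    return sum(energy_kwh) / len(energy_kwh)
```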
```bash
# Build documentation
make docs

# Serve documentation locally
make docs-serve

# Deploy to GitHub Pages
make docs-deploy
```

- API Reference: auto-generated from docstrings
- User Guide: step-by-step usage instructions
- Developer Guide: architecture and development info
- Examples: Jupyter notebooks and code samples
```python
def predict_energy(installation_id: str, date: datetime) -> PredictionResult:
    """Predict energy production for a specific installation and date.

    Args:
        installation_id: Unique identifier for the PV installation
        date: Date for which to predict energy production

    Returns:
        PredictionResult containing energy predictions and metadata

    Raises:
        ValueError: If installation_id is not found
        ModelNotTrainedError: If no trained model exists

    Example:
        >>> predictor = EnergyPredictor(data_processor)
        >>> result = predictor.predict_energy("Lisbon_1", datetime(2023, 6, 15))
        >>> print(f"Predicted energy: {result.energy_kwh:.2f} kWh")
    """
```

```bash
# Build production image
make docker-build

# Build development image
make docker-build-dev

# Run container
make docker-run
```

```bash
# Start all services
make docker-up

# Start with monitoring
docker-compose up --profile monitoring

# Start with GUI support
docker-compose up --profile gui

# View logs
make docker-logs

# Stop all services
make docker-down
```

```bash
# Run the development container with code mounting
docker-compose up development

# Execute commands inside a container
docker-compose exec filantropia-api bash
```

```bash
# CPU profiling
make profile

# Memory profiling
make memory-profile

# Benchmark performance
make benchmark
```

- Use `polars` instead of `pandas` for large datasets
- Implement async/await for I/O operations
- Use `lru_cache` for expensive computations
- Profile before optimizing
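As a quick illustration of the `lru_cache` tip (the function below is a made-up stand-in for an expensive computation, not project code):

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def normalized_irradiance(day_of_year: int) -> float:
    # Stand-in for an expensive computation; the formula is illustrative only
    return sum(i * i for i in range(day_of_year * 100)) % 1_000 / 1_000


first = normalized_irradiance(180)   # computed
second = normalized_irradiance(180)  # served from the cache
assert first == second
print(normalized_irradiance.cache_info())  # hits=1, misses=1 at this point
```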
```python
# Good: Use context managers
with data_processor.load_data(installation_id) as data:
    result = process_data(data)

# Good: Use generators for large datasets
def process_records():
    for record in data_stream:
        yield process_record(record)

# Good: Explicit cleanup
del large_dataframe
gc.collect()
```

```bash
# Never commit secrets to version control
echo "SECRET_KEY=your-secret-here" >> .env

# Use environment-specific files
cp .env.template .env.development
cp .env.template .env.production
```

```bash
# Run security scan
make security

# Check for secrets in commits
git secrets --scan

# Update dependencies for security patches
make update-deps
```

```python
from loguru import logger

# Use structured logging: extra keyword arguments are attached to the record
logger.info("Processing installation",
            installation_id=installation_id,
            date=date.isoformat())

# Different log levels
logger.debug("Detailed debug information")
logger.info("General information")
logger.warning("Warning message")
logger.exception("Error occurred")  # logs at ERROR level with the traceback
```

```python
# Interactive debugging with ipdb (pip install ipdb)
import ipdb; ipdb.set_trace()

# Memory profiling
from memory_profiler import profile

@profile
def my_function():
    # Function to profile
    pass

# Performance profiling
import cProfile
cProfile.run('my_function()', 'profile_stats')
```

```bash
# Update the version in pyproject.toml, then create a git tag
git tag -a v2.0.0 -m "Release version 2.0.0"
git push origin v2.0.0

# Build and check the package
make build
make build-check

# Release to test PyPI
make release-test

# Release to PyPI
make release
```

The GitHub Actions workflow automatically:

- Runs tests on multiple Python versions
- Performs code quality checks
- Builds documentation
- Creates Docker images
- Releases to PyPI on tags
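The `cProfile.run(...)` call in the debugging tools above dumps raw stats to a file; to inspect results in-process with the stdlib `pstats` module, a sketch (the `work` function is a placeholder for the code being profiled):

```python
import cProfile
import io
import pstats


def work() -> int:
    # Placeholder for the code being profiled
    return sum(i * i for i in range(10_000))


profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```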
To add a feature:

1. Create a feature branch

   ```bash
   git checkout -b feature/new-awesome-feature
   ```

2. Write tests first (TDD approach)

   ```bash
   # Create test file
   touch tests/test_new_feature.py
   # Write failing tests
   make test
   ```

3. Implement the feature

   ```bash
   # Add implementation, then run tests to verify
   make test
   ```

4. Check code quality

   ```bash
   make quality
   make test-cov
   ```

5. Commit with conventional commits

   ```bash
   git add .
   git commit -m "feat: add awesome new feature"
   ```

6. Push and create a PR

   ```bash
   git push origin feature/new-awesome-feature
   # Create a Pull Request on GitHub
   ```

To fix a bug:

1. Reproduce the issue

   ```bash
   # Create a minimal reproduction case, then run the relevant tests
   make test-integration
   ```

2. Add logging

   ```python
   logger.debug("Debug info", data=data)
   ```

3. Use the debugger

   ```python
   import ipdb; ipdb.set_trace()
   ```

4. Profile if performance-related

   ```bash
   make profile
   make memory-profile
   ```

To optimize performance:

1. Measure first

   ```bash
   make benchmark  # Get a baseline
   ```

2. Profile bottlenecks

   ```bash
   make profile  # Identify slow functions
   ```

3. Optimize critical paths: use appropriate data structures, implement caching, use vectorized operations

4. Measure again

   ```bash
   make benchmark  # Compare results
   ```
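The "conventional commits" step above follows the common `type(scope): description` pattern; a minimal checker sketch (the accepted types listed here are an assumption based on the Conventional Commits spec, not project configuration):

```python
import re

# Common Conventional Commits types; adjust to the project's actual policy
CONVENTIONAL_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore)"
    r"(\([\w-]+\))?!?: .+"
)


def is_conventional(message: str) -> bool:
    """Return True if the first line of a commit message looks conventional."""
    if not message:
        return False
    return bool(CONVENTIONAL_RE.match(message.splitlines()[0]))


print(is_conventional("feat: add awesome new feature"))  # True
print(is_conventional("update stuff"))                   # False
```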
- Check logs: `logs/application.log`
- Run diagnostics: `make env-info`
- Check dependencies: `make deps-check`
- Search issues: the GitHub Issues tab
- Ask questions: create a new GitHub Issue

If the problem is performance:

- Profile the application: `make profile`
- Check resource usage: `htop`, `docker stats`
- Review logs: look for warning messages
- Check the database: monitor query performance
| Problem | Solution |
|---|---|
| Import errors | `pip install -e .` |
| Test failures | `make clean && make test` |
| Pre-commit issues | `pre-commit clean` |
| Docker issues | `docker system prune` |
| Memory issues | Check model sizes, use pagination |
- Virtual environment activated
- Dependencies installed (`make install`)
- Pre-commit hooks working (`pre-commit run`)
- Tests passing (`make test`)
- Environment configured (`.env` file)

Before committing:

- All tests pass (`make test-cov`)
- Code formatted (`make format`)
- No linting errors (`make lint`)
- Type checking passes (`make typecheck`)
- Documentation updated
- Commit message follows conventions

Before releasing:

- Version updated in `pyproject.toml`
- CHANGELOG.md updated
- Documentation built (`make docs`)
- All tests pass in CI
- Docker image builds successfully
- Security scan clean (`make security`)
Happy coding!
For additional help, see the main README.md or create an issue on GitHub.