A cloud-native Kubernetes controller for managing time-based machine scheduling on physical nodes using Cluster API (CAPI).
5-Spot automatically adds and removes machines from your CAPI clusters based on time schedules. Perfect for:
- Cost optimization: Only run expensive hardware during business hours
- Resource management: Scale clusters based on predictable workload patterns
- Energy efficiency: Reduce power consumption during off-hours
- Testing and staging: Automatically provision test environments
- ⏰ Time-based scheduling with timezone support
- 📅 Flexible schedules - Support for day ranges (mon-fri) and hour ranges (9-17)
- 🔄 Graceful shutdown - Configurable grace periods with automatic node draining
- 🎯 Priority-based - Resource distribution across controller instances
- 🚨 Kill switch - Emergency immediate removal capability
- 📊 Multi-instance - Horizontal scaling with consistent hashing
- 🔍 Full observability - Prometheus metrics and health checks
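As a rough illustration of the multi-instance idea (not the controller's actual algorithm), each replica can claim only the ScheduledMachines that hash to its index. The `assigned_replica` function below is a hypothetical, simplified stand-in: a real consistent-hash ring minimizes reshuffling when the replica count changes, while this modulo version is only for intuition.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified stand-in for consistent-hash distribution: hash each
// ScheduledMachine name and assign it to one of `replicas` controller
// instances; each replica reconciles only its own share.
fn assigned_replica(name: &str, replicas: u64) -> u64 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    h.finish() % replicas
}

fn main() {
    // With 3 replicas, every machine lands deterministically on one replica.
    for name in ["business-hours-machine", "nightly-batch", "ci-runner"] {
        println!("{name} -> replica {}", assigned_replica(name, 3));
    }
}
```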
- Kubernetes cluster (1.27+)
- kubectl configured
- Cluster API (CAPI) installed
- Apply the CRD:

```shell
kubectl apply -f deploy/crds/scheduledmachine.yaml
```

- Deploy the controller:

```shell
kubectl apply -R -f deploy/deployment/
```

- Create a ScheduledMachine:

```shell
kubectl apply -f examples/scheduledmachine-basic.yaml
```

```yaml
apiVersion: 5spot.finos.org/v1alpha1
kind: ScheduledMachine
metadata:
  name: business-hours-machine
  namespace: default
spec:
  schedule:
    daysOfWeek:
      - mon-fri
    hoursOfDay:
      - 9-17
    timezone: America/New_York
    enabled: true
  machine:
    address: 192.168.1.100
    user: admin
    port: 22
    files: []
  bootstrapRef:
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    name: worker-bootstrap-config
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: MachineTemplate
    name: worker-machine-template
  clusterName: my-k0s-cluster
  priority: 50
  gracefulShutdownTimeout: 5m
```

- Rust 1.75+ (install via rustup)
- Python 3.10+ (for documentation)
- Poetry (install via `curl -sSL https://install.python-poetry.org | python3 -`)
To build Docker images for Linux from macOS, you need a cross-compilation toolchain. The recommended approach is using the cross tool:
```shell
# Install cross (recommended - handles everything via Docker)
cargo install cross

# Build Docker images
make docker-build        # Auto-detect, defaults to linux/amd64
make docker-build-amd64  # Explicitly build for linux/amd64
make docker-build-arm64  # Explicitly build for linux/arm64
```

Alternative: install GNU cross-compilation toolchains directly (faster builds without Docker overhead):
```shell
# Add the cross-toolchains tap
brew tap messense/macos-cross-toolchains

# For linux/amd64 from macOS
brew install x86_64-unknown-linux-gnu
rustup target add x86_64-unknown-linux-gnu

# For linux/arm64 from macOS
brew install aarch64-unknown-linux-gnu
rustup target add aarch64-unknown-linux-gnu
```

The project includes `.cargo/config.toml` with linker configuration for these targets. If building in a different directory, create this config:
```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
linker = "x86_64-unknown-linux-gnu-gcc"

[target.aarch64-unknown-linux-gnu]
linker = "aarch64-unknown-linux-gnu-gcc"
```

Note: `rustup target add` alone is not sufficient - crates with C dependencies (like `ring`) require a linker from the GNU toolchain.
If you're in an air-gapped environment or behind a corporate firewall without access to public registries (crates.io, static.rust-lang.org, pypi.org, gcr.io), you'll need to configure alternative registries.
Create a .cargo/config.toml in a dedicated directory to route Rust crates through your Artifactory mirror:
```shell
mkdir -p ~/.cargo-airgap
cat > ~/.cargo-airgap/config.toml << 'EOF'
# Air-gapped Cargo configuration using Artifactory
[registry]
default = "artifactory"

[registries.artifactory]
index = "sparse+https://artifactory.example.com/artifactory/api/cargo/crates-io-remote/index/"

[source.artifactory-remote]
registry = "sparse+https://artifactory.example.com/artifactory/api/cargo/crates-io-remote/index/"

[source.crates-io]
replace-with = "artifactory-remote"
EOF
```

Then use AIRGAP_CARGO_HOME when building:
```shell
# Build binaries using the Artifactory registry
AIRGAP_CARGO_HOME=~/.cargo-airgap make docker-build-amd64

# Or for arm64
AIRGAP_CARGO_HOME=~/.cargo-airgap make docker-build-arm64
```

Set PYPI_INDEX_URL for documentation builds:

```shell
export PYPI_INDEX_URL=https://artifactory.example.com/api/pypi/pypi/simple
make docs-serve
```

For Docker builds, configure buildx to trust your Artifactory mirrors for container images:
```shell
# Create buildx config directory
mkdir -p ~/.docker/buildx

# Create buildkitd.toml for Artifactory registries
cat > ~/.docker/buildx/buildkitd.toml << 'EOF'
# BuildKit daemon configuration for air-gapped environments
# Skip TLS verification for internal registries
[registry."artifactory.example.com"]
insecure = true

# Add mirrors for common registries (gcr.io, ghcr.io)
[registry."oss-docker-gcr.artifactory.example.com"]
insecure = true

[registry."oss-docker-ghcr.artifactory.example.com"]
insecure = true
EOF

# Create the buildx builder with the config
docker buildx create --name fivespot-builder --config ~/.docker/buildx/buildkitd.toml
docker buildx use fivespot-builder
```

Then build with your mirrored base image:
```shell
make docker-build-amd64 BASE_IMAGE=oss-docker-gcr.artifactory.example.com/distroless/cc-debian12:nonroot
```

```shell
# Set environment variables
export AIRGAP_CARGO_HOME=~/.cargo-airgap
export PYPI_INDEX_URL=https://artifactory.example.com/api/pypi/pypi/simple
export BASE_IMAGE=oss-docker-gcr.artifactory.example.com/distroless/cc-debian12:nonroot

# Build Docker image for amd64
make docker-build-amd64

# Or build for arm64
make docker-build-arm64
```

Add these to your shell profile (~/.bashrc, ~/.zshrc) for persistence:
```shell
export AIRGAP_CARGO_HOME=~/.cargo-airgap
export PYPI_INDEX_URL=https://artifactory.example.com/api/pypi/pypi/simple
```

```shell
# Build the project
cargo build --release

# Generate CRDs
cargo run --bin crdgen > deploy/crds/scheduledmachine.yaml

# Generate API documentation
cargo run --bin crddoc > docs/reference/api.md

# Run tests
cargo test

# Format and lint
cargo fmt
cargo clippy -- -D warnings
```

```shell
# Serve documentation locally with live reload
make docs-serve

# Build all documentation (MkDocs + rustdoc)
make docs

# Build only Rust API docs
make docs-rustdoc
```

5-Spot includes gitleaks for secret scanning to prevent accidental credential exposure.
```shell
# Run a one-time scan of the repository
make gitleaks

# Install pre-commit hook (recommended for all developers)
make install-git-hooks

# Run all local security scans
make security-scan-local
```
- Install gitleaks (automatic with make targets):

  ```shell
  make gitleaks-install
  ```

  This downloads and installs gitleaks with checksum verification.

- Install the pre-commit hook (prevents committing secrets):

  ```shell
  make install-git-hooks
  ```

  This creates a `.git/hooks/pre-commit` hook that scans staged changes before each commit.

- Configure allowlists (for false positives) by editing `.gitleaks.toml`:

  ```toml
  [allowlist]
  paths = [
      '''tests/fixtures/''', # Test data with fake secrets
  ]
  regexes = [
      '''example-token-.*''', # Example tokens in docs
  ]
  ```
Gitleaks runs automatically in CI via the security scan workflow. Failed scans will:
- Block PR merges
- Create GitHub issues for detected secrets
- Generate reports in workflow artifacts
- False positives: add patterns to the `.gitleaks.toml` allowlist
- Pre-commit too slow: use `gitleaks protect --staged` (the default) instead of a full repo scan
- Secrets in history: use BFG Repo-Cleaner to remove them from git history
```
src/
├── main.rs # Entry point and controller setup
├── lib.rs # Library exports
├── crd.rs # CRD definitions (source of truth)
├── crd_tests.rs # CRD tests
├── constants.rs # Global constants
├── labels.rs # Kubernetes labels
├── reconcilers/ # Reconciliation logic
│ ├── mod.rs
│ ├── scheduled_machine.rs
│ ├── helpers.rs
│ └── scheduled_machine_tests.rs
└── bin/
├── crdgen.rs # CRD YAML generator
└── crddoc.rs # Documentation generator
```
- Pending → Initial state, evaluating schedule
- Scheduled → Within time window, being added to cluster
- Active → Running and part of cluster
- Removing → Grace period, node draining, preparing for removal
- Inactive → Removed from cluster
- UnScheduled → Outside time window
- Error → Recoverable error state
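The lifecycle above can be sketched as a small state machine. This is a hypothetical illustration, not the controller's actual code (the real status types live in src/crd.rs and the transition logic in src/reconcilers/); the `next` function and its exact transition rules are assumptions.

```rust
// Hypothetical sketch of the documented phases; names mirror the list above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Phase {
    Pending, Scheduled, Active, Removing, Inactive, UnScheduled, Error,
}

// One reconciliation step, driven by whether the machine is currently
// inside its configured time window.
fn next(phase: Phase, in_window: bool) -> Phase {
    use Phase::*;
    match (phase, in_window) {
        (Pending, true) | (UnScheduled, true) | (Inactive, true) => Scheduled,
        (Pending, false) => UnScheduled,
        (Scheduled, true) => Active,
        (Active, false) => Removing, // grace period + node drain begins
        (Removing, _) => Inactive,   // drain finished, Machine deleted
        (p, _) => p,                 // otherwise, stay put (incl. Error)
    }
}

fn main() {
    // Happy path through one business-hours window:
    let p = next(Phase::Pending, true); // Scheduled
    let p = next(p, true);              // Active
    let p = next(p, false);             // Removing
    assert_eq!(next(p, false), Phase::Inactive);
}
```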
During the Removing phase, the controller performs automatic node draining:
- Cordon - Marks the node as unschedulable
- Evict pods - Gracefully evicts all pods (except DaemonSets)
- Timeout - Respects the `nodeDrainTimeout` configuration
- Delete - Removes the CAPI Machine after drain completes
Configure drain behavior in your ScheduledMachine:
```yaml
spec:
  gracefulShutdownTimeout: 5m # Grace period before draining starts
  nodeDrainTimeout: 5m        # Maximum time for node drain
```

The controller evaluates schedules every 60 seconds:
- Checks the current day against `daysOfWeek`
- Checks the current hour against `hoursOfDay`
- Respects the configured timezone
- Handles grace periods for smooth transitions
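The window check itself can be sketched as follows. This is a simplified illustration, not the controller's actual implementation: `in_window`, the day/hour numbering, and the inclusive treatment of ranges like `mon-fri` and `9-17` are all assumptions.

```rust
// Hypothetical sketch of the schedule evaluation described above.
// Days: 0 = Monday .. 6 = Sunday; hours: 0..23, interpreted in the
// ScheduledMachine's configured timezone. Both ranges assumed inclusive.
fn in_window(day: u8, hour: u8, days: (u8, u8), hours: (u8, u8)) -> bool {
    let (d0, d1) = days;
    let (h0, h1) = hours;
    (d0..=d1).contains(&day) && (h0..=h1).contains(&hour)
}

fn main() {
    let mon_fri = (0, 4);
    let nine_to_five = (9, 17);
    assert!(in_window(2, 10, mon_fri, nine_to_five));  // Wednesday 10:00
    assert!(!in_window(5, 10, mon_fri, nine_to_five)); // Saturday
    assert!(!in_window(2, 18, mon_fri, nine_to_five)); // after 17:00
}
```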
Use leader election to run multiple controller replicas for high availability:
```shell
# Set environment variables
ENABLE_LEADER_ELECTION=true
LEASE_NAME=5spot-leader
```

- `--enable-leader-election` / `ENABLE_LEADER_ELECTION`: Enable leader election (default: false)
- `--lease-name` / `LEASE_NAME`: Lease resource name (default: 5spot-leader)
- `--metrics-port` / `METRICS_PORT`: Metrics server port (default: 8080)
- `--health-port` / `HEALTH_PORT`: Health check port (default: 8081)
- `--verbose` / `-v`: Enable verbose logging
See API Reference for complete field documentation.
Prometheus metrics exposed on port 8080:
- `five_spot_up`: Operator health
- (More metrics to be implemented)

Health endpoints:

- `/health`: Liveness probe (port 8081)
- `/ready`: Readiness probe (port 8081)
Contributions welcome! Please:
- Follow Rust best practices
- Add tests for new features
- Update documentation
- Run `cargo fmt` and `cargo clippy`
- Ensure all tests pass
- Install git hooks: `make install-git-hooks` (prevents committing secrets)
- Slack: `#5-spot` on the FINOS Slack workspace. Join the workspace at https://finos.org/slack if you don't already have access. Use `#5-spot` for usage questions, schedule and reconciliation behaviour, CAPI integration questions, and release coordination.
- GitHub Issues: https://github.com/finos/5-spot/issues for bugs, feature requests, and CRD schema discussion. Security-sensitive reports should follow the process in Security below, not Issues.
- GitHub Discussions: https://github.com/finos/5-spot/discussions for longer-form design and architecture conversations that outgrow a Slack thread.
- Gitleaks: Pre-commit and CI secret scanning
- Signed Commits: Recommended for all contributors
- VEX (OpenVEX): Every release ships a signed OpenVEX document (`vex.openvex.json` + `.bundle`) that records per-CVE triage decisions (not_affected / affected / fixed / under_investigation). Consumers can feed it into `grype --vex` or `trivy --vex` to suppress findings 5-Spot has already triaged as non-exploitable. See docs/src/security/vex.md and `.vex/README.md`.
Report security issues to the maintainers.
- Apache License, Version 2.0 (LICENSE-APACHE)