Nubium Cloud
by Cloudflavor

Cloud GPU Orchestration For European SMEs

Smart GPU scheduling on EU clouds. Pay per job, not 24/7. Bring your own cloud (BYOC) on Exoscale, Scaleway, and Hetzner. Optional datacenter bridge for hybrid workflows. GitOps deployment, with EU data policies enforced before jobs run.

For Investors
Discover the Platform
Platform Architecture

GitOps ML Platform Built for Hybrid Cloud

Our operator manages your ML infrastructure. Agents execute jobs. Git defines everything. Works across datacenter and cloud.

GitOps Native

Define infrastructure as code. Push to git, automatic deployment via Nubium Cloud.

$ git push origin main
✓ Deploying to cloud

Simple CLI

Deploy your entire project to your staging workspace. No complex configs.

$ nubium deploy . --workspace staging
📦 Packaging project...
🔐 Uploading to control plane...
🚀 Deploying to GPU cluster...
✅ Job running: job-id-xyz

No YAML. Ever.

Real code with Starlark. Functions, not strings. Logic, not templates.

# spark_etl.star
SparkJob(
    name = "customer-etl",
    jar = "s3://jars/etl.jar",
    main_class = "com.company.ETL",
    
    driver = Driver(
        memory = "8Gi",
        cores = 2
    ),
    
    executors = Executors(
        instances = 10,
        memory = "32Gi",
        cores = 4,
        gpu = True  # RAPIDS acceleration
    ),
    
    conf = {
        "spark.rapids.sql.enabled": True,
        "spark.sql.adaptive.enabled": True
    }
)

Agent-Based

Secure agents handle job execution. Control plane orchestrates. Zero manual setup.

→ Control plane manages agents
→ Automatic provisioning

Smart Scheduling

GPUs spin up when a job starts and shut down when it finishes. No idle costs. Optional datacenter bridge for data staging.

job submitted → GPU allocated → job done → GPU released
✓ Pay only for actual usage
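The lifecycle above can be sketched in a few lines. This is an illustrative model, not the platform's actual scheduler API, and the billing rate is an assumed placeholder.

```python
import time

BILL_PER_SECOND = 0.001  # assumed placeholder rate, for illustration only

class JobLifecycle:
    """Toy model of the submitted -> allocated -> released flow above."""

    def __init__(self):
        self.state = "submitted"
        self.billable_seconds = 0.0
        self._allocated_at = None

    def allocate_gpu(self):
        # Billing starts only once a GPU is actually allocated.
        self.state = "allocated"
        self._allocated_at = time.monotonic()

    def finish(self):
        # The GPU is released as soon as the job completes; idle time is never billed.
        self.billable_seconds = time.monotonic() - self._allocated_at
        self.state = "released"

    def cost(self):
        return self.billable_seconds * BILL_PER_SECOND

job = JobLifecycle()
job.allocate_gpu()
time.sleep(0.1)  # stand-in for the actual training run
job.finish()
print(job.state)  # released
```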

Define Infrastructure

Provision clusters, configure providers, set resource limits. All in code.

# infrastructure.star
Cluster(
    name = "prod-gpu-cluster",
    provider = Exoscale,
    region = "ch-gva-2",
    
    nodes = NodePool(
        gpu_nodes = 4,
        gpu_type = "A100",
        cpu_nodes = 8
    )
)

Define Policies

Control where data lives and compute runs. Enforced before jobs start.

# policies.star
Policy(
    name = "eu_data_sovereignty",
    
    Deny(
        When(
            data_region = "eu",
            compute_region = Not("eu-*")
        ),
        message = "EU data must stay in EU"
    )
)
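A minimal sketch of how such a rule might be evaluated before a job is admitted. The glob-style matching of `Not("eu-*")` is an assumption about the DSL's semantics, and `violates_eu_sovereignty` is a hypothetical helper, not part of the platform.

```python
from fnmatch import fnmatch

def violates_eu_sovereignty(data_region: str, compute_region: str) -> bool:
    """Deny when EU-tagged data would be processed outside an eu-* region."""
    return data_region == "eu" and not fnmatch(compute_region, "eu-*")

# Checked before the job starts, mirroring the deny rule above.
print(violates_eu_sovereignty("eu", "eu-central-2"))  # False: allowed
print(violates_eu_sovereignty("eu", "us-east-1"))     # True: denied
```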
Feature              SageMaker            Vertex AI            Nubium
Infrastructure       AWS Only             GCP Only             Your Accounts
Deployment Model     Proprietary API      Proprietary API      GitOps
On-Premise Support   No                   No                   Yes
Data Location        AWS Regions          GCP Regions          Your Choice
Platform Fee         Included in compute  Included in compute  €299-2999/month
The Opportunity

Smart GPU Scheduling: Pay Only When Training

Cloud GPUs that spin up for your job, then shut down. No 24/7 instances. Optional: connect your datacenter for development and data staging. One platform manages both.


Cloud GPUs Are Expensive

A100 instances cost €3-5/hour. Running one 24/7 for availability comes to €2,000-3,600/month per GPU, and most of that time is spent idle.
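Working the quoted rates through (the 15% utilization figure is an assumed example, not a measured number):

```python
HOURS_PER_MONTH = 730  # average hours in a month

def always_on_cost(rate_eur_per_hr):
    """A 24/7 instance bills every hour, busy or idle."""
    return rate_eur_per_hr * HOURS_PER_MONTH

def per_job_cost(rate_eur_per_hr, utilization):
    """Per-job billing only charges for hours a job is actually running."""
    return rate_eur_per_hr * HOURS_PER_MONTH * utilization

print(always_on_cost(3.0))                   # 2190.0 €/month, the low end of the quoted range
print(round(per_job_cost(3.0, 0.15), 1))     # 328.5 €/month at an assumed 15% utilization
```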

Platforms Lock You In

SageMaker and Vertex AI force you onto their infrastructure. You can't use EU providers. You can't integrate your datacenter. Their way only.

Manual Scheduling

You manually start/stop instances. Forget to shut down? Pay all weekend. Need GPUs at 3am? Wake up to start them.

BYOC

Bring Your Own Cloud: Your Infrastructure, Your Control

Your infrastructure, your rules. Use your existing cloud accounts, datacenter hardware, or bare metal servers. We orchestrate across all of them with a single control plane.

Cloud Journey

Beta Preview

Exoscale

Swiss precision, ready today

Q1 2026

Hetzner
Scaleway

EU cloud dominance

2026

AWS + Azure + GCP

Global scale

Your Datacenter

Deploy to your existing virtualization infrastructure.

On-Premise: VMware, OpenStack, Proxmox

Your Policies

Define where data lives, where compute runs, and how resources are allocated.

Data Residency, Compute Location, Cost Limits
The Solution

One Platform, Three Simple Steps

Connect your infrastructure. Configure policies. Run ML workloads. Everything managed through Git.

1

Connect

Link your infrastructure and cloud accounts. Agents auto-provision in your datacenter. Secure WireGuard tunnels connect everything.

2

Configure

Define policies: where data lives, where compute runs. Simple TOML configuration. GitOps deployment.

3

Run

Push code to git. Platform stages data to cloud, allocates GPUs, runs training. Pay only for GPU time used.

Benefits

Why Hybrid Beats Pure Cloud

[Chart: Monthly Costs (€), Traditional ML Platforms (24/7 billing) vs. Hybrid (pay per job). Train on-premise, store in cloud, orchestrate efficiently: 70-90% Lower TCO*]

Use Existing Infrastructure

Leverage your datacenter investment. No need to migrate everything to cloud.

EU Providers = Lower Egress Costs

Exoscale: 1TB/instance pooled. Hetzner EU: 20TB included. Scaleway: No egress fees. AWS: $0.09/GB after 100GB.
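As a worked example with the AWS figure above (1 TB out, $0.09/GB after a 100 GB free tier; `aws_egress_usd` is an illustrative helper, and provider prices change):

```python
def aws_egress_usd(gb, free_gb=100, rate_per_gb=0.09):
    """Egress bill: $0.09/GB after the first 100 GB, per the figures above."""
    return max(gb - free_gb, 0) * rate_per_gb

print(round(aws_egress_usd(1000), 2))  # 81.0 USD to move 1 TB out, vs 0 where egress is included
print(aws_egress_usd(50))              # 0: still inside the free tier
```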

Burst When Needed

Use cloud for peaks, datacenter for baseline. Optimal cost structure.

Maintain Compliance

Keep sensitive data on-premise while using cloud for compute.

The Architecture

How EU Cloud + Datacenter Actually Works

Leverage generous bandwidth allowances from EU providers. Train models on cloud GPUs with predictable costs. Keep data sovereignty while accessing on-demand compute.

Your Datacenter

  • Your existing GPU infrastructure
  • On-premise data processing
  • Full control over sensitive data
  • Optimize hybrid cloud costs

EU/Swiss Cloud

  • Exoscale (V100, A30, A40)
  • Scaleway (H100, L4 @ €0.75/hr)
  • Hetzner (RTX dedicated servers)
  • Your accounts, lower egress costs

How It Works

1

Nubium Installs

We deploy our platform on your infrastructure. Fully managed setup, no manual installation.

2

Define Infrastructure

Define ML workloads as code. Push to your git repository.

3

Submit Jobs

Use CLI to submit training or inference jobs to your agents.

4

Orchestrate Workloads

Control plane manages job execution across your hybrid infrastructure.

Cost Model Difference

Traditional ML Platforms: Pay 24/7
Nubium Platform: Pay per job

You bring your own infrastructure. We provide the orchestration. Platform fee: €299-2999/month based on usage.

GPU Orchestration

Intelligent Job Scheduling, Efficient GPU Utilization

Our agent schedules GPU jobs on bare metal or VMs with PCIe passthrough. High-performance data layer with intelligent caching. Zero complexity for you.

GPU Node

Bare Metal / VM with Passthrough

  • NVIDIA/AMD GPUs
  • PCIe Passthrough
  • Direct Hardware Access

Tomis Agent

Lightweight Scheduler

  • Job Scheduling
  • Resource Management
  • Health Monitoring

Container Runtime

Isolated Execution

  • Container Isolation
  • GPU Device Mapping
  • Volume Mounts

Data Flow Architecture

Cloud Storage (Delta/Parquet) → Data Layer → Container Volume → GPU Compute

Distributed filesystem with local caching. Fast data access for training workloads. Optimized for ML performance.

Smart Data Caching

High-performance distributed filesystem with local caching. Optimized for ML training workloads.

Simple Deployment

Single binary deployment. Works on any Linux with GPU drivers installed.

Pay-Per-Use

Spin up GPU nodes only when needed. Automatic shutdown after jobs complete. No GPU idle costs.

Workspaces

Your Team's Analytics Command Center

Isolated environments where data teams collaborate. JupyterHub, Spark clusters, and storage - all in one place.

Everything Your Team Needs

JupyterHub for notebooks, managed compute clusters for big data, cloud storage for artifacts. Each workspace is completely isolated with its own resources.

ML-First Infrastructure

Built-in MLflow for experiment tracking and model registry. Delta Lake enabled for reliable data pipelines. GPU support for training deep learning models.

Minimal Setup

Pre-configured with pandas, scikit-learn, TensorFlow, PyTorch, XGBoost. Auto-shutdown saves costs. Real-time monitoring shows exactly what's running.

Data Layer

Delta Lake Format on Distributed Storage

ACID transactions via Delta Lake libraries. Distributed filesystem with caching. Optimized for datasets up to 200GB, with automatic staging for larger workloads.

Unified Batch & Streaming

Single source of truth for both real-time and historical data analytics.

Schema Evolution

Add columns, change data types. Your pipelines don't break when schemas change.

Time Travel

Query data as it was yesterday, last week, or last month. Perfect for debugging and audits.
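A toy model of what time travel means: every commit produces an immutable snapshot, and reads can target any past version. This is a conceptual sketch of versioned snapshots, not Delta Lake's actual transaction-log format.

```python
class VersionedTable:
    """Conceptual sketch: an append-only log of immutable table versions."""

    def __init__(self):
        self._versions = [[]]  # version 0 is the empty table

    def append(self, rows):
        # Each commit creates a new snapshot; old versions stay readable.
        self._versions.append(self._versions[-1] + list(rows))

    def read(self, version=None):
        # Omit `version` for the latest snapshot; pass one to "time travel".
        return self._versions[-1 if version is None else version]

t = VersionedTable()
t.append([{"id": 1}])
t.append([{"id": 2}])
print(t.read())           # current state: both rows
print(t.read(version=1))  # the table as it was after the first commit
```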

ACID: Atomicity, Consistency, Isolation, Durability

[Diagram: Streaming, Batch, and Analytics on Unified Storage, with Time Travel, Schema Evolution, and ACID Guarantees]
Data Connectors

Connect to Everything, Query from Anywhere

Native connectors for databases and cloud storage. Direct access to your data sources.

Object Storage
Snowflake
BigQuery
Nubium
PostgreSQL
Apache Kafka
Redis
GitOps Native

Enterprise ML Stack: Runs Anywhere You Need

Delta Lake, MLflow, Jupyter - the tools you know. Deployed via GitOps to datacenter OR cloud. Push to deploy, anywhere.

DATA SOURCES

Object Storage
Delta Lake
Databases

PROCESSING

Spark Clusters
GPU Compute

ML PLATFORM

MLflow
Experiments
Model Registry

APPLICATIONS

Jupyter
API Services
AI Agents
Managed Spark

Serverless Spark

Submit job, cluster spins up, job runs, cluster shuts down. Pay only for execution time. GPU-enabled with RAPIDS.

GPU support

GPU-Optimized

Access to latest GPUs across European data centers through your cloud accounts.

GitOps

GitOps Native

Push to deploy. Infrastructure as code. Automatic rollbacks and version control.

Under the Hood

Built on Battle-Tested Open Source

Standard tools you already know. No proprietary formats. Take your configs anywhere.

The Stack

Cloud Provider Choice

Deploy to your preferred EU cloud provider

Exoscale, Hetzner, Scaleway

GitOps Engine

Push to Git, infrastructure and config update automatically

Dockerfiles, MLflow jobs, environments - all in Git

Data Platform

Apache Spark + Delta Lake + Cloud Storage

ACID transactions, distributed storage

ML Tooling

JupyterHub + MLflow + distributed training frameworks

GPU support, experiment tracking, managed clusters

How It Deploys

# 1. Define your infrastructure in Git
infrastructure/
├── clusters/
│   ├── exoscale-gpu.yaml
│   └── hetzner-storage.yaml
├── apps/
│   ├── spark/config.yaml
│   ├── jupyter/config.yaml
│   └── mlflow/config.yaml
└── policies/
    └── data-residency.yaml

# 2. Push to Git
$ git add . && git commit -m "Add GPU cluster"
$ git push origin main

# 3. Nubium handles the rest
✓ Provisions Exoscale GPU nodes
✓ Deploys Spark clusters
✓ Applies your configs
✓ Manages lifecycle

# 4. Submit jobs via CLI
$ nubium submit train.py
$ nubium logs job-123
$ nubium status

On-Demand GPU Jobs

  • Provision a GPU for the specific job
  • Run training or inference
  • Save results and shut down
  • Pay only for job duration

Deploy to Exoscale for Swiss data sovereignty. Or Hetzner for German compliance. Or your own datacenter.

Standard container orchestration. You own the infrastructure.

Everything You Need Out of the Box

A complete ML platform from day one. No assembly required.


Jupyter Notebooks

GPU-accelerated notebooks with automatic resource management

Experiment Tracking

MLflow-compatible experiment logging and comparison

Model Registry

Version, stage, and deploy models with confidence

Burst Spark Clusters

Spark clusters that spin up for your job, then shut down. GPU-enabled. No idle costs.

Delta Lake Storage

ACID transactions, time travel, and schema evolution

Team Workspaces

Isolated environments with shared resources

GitOps Everything

Infrastructure as code, always in sync

Built for Teams That Value Control


European Companies

Organizations that need true data sovereignty with EU-based infrastructure.

Hybrid Cloud Teams

Engineering groups with on-premise infrastructure who need cloud burst capabilities.

Cost-Conscious Startups

Growing companies that want enterprise features without enterprise pricing.

Platform Teams

DevOps and MLOps engineers who need flexibility to customize their stack.

Research Groups

Academic and R&D teams needing reproducible, scalable experiment infrastructure.

Why Choose Nubium

Freedom to Build Your Way

Stop being locked into expensive platforms. Start owning your ML infrastructure.


Cloud Provider Freedom

Choose your preferred EU cloud provider. Switch providers without vendor lock-in. Keep data where regulations require.

True Hybrid Cloud

Connect your datacenter with Exoscale, Hetzner, or Scaleway. Process data anywhere, store it where required.

GitOps from Day One

Version control everything. Roll back anything. Collaborate naturally. Infrastructure as code that actually works.

Pay Only for Compute

No per-user fees, no feature gates, no surprise bills. Pay your cloud provider for resources, nothing more.

EU Data Sovereignty

Keep your data in Europe, maintain EU data residency, and avoid US cloud act complications.

Deploy in Minutes

Pre-configured templates for common ML workloads. Customize everything later, ship today.

Managed Infrastructure

Focus on ML, not ops. We handle orchestration, scaling, and reliability. You ship models.

*Disclaimer: Pricing estimates and cost savings are based on current cloud provider rates and typical usage patterns. Actual costs may vary depending on specific workloads, data transfer volumes, and cloud provider pricing changes. Features marked as "Coming Q1 2026" are planned but subject to change. Beta features may have limited functionality.

Limited Beta Access: Early Beta Q1 2026

Be Among the First

Get early access to Nubium Cloud. Help shape the future of European AI infrastructure.


Nubium

GitOps platform for hybrid cloud ML. Bridge datacenter to cloud. European data sovereignty guaranteed.


© 2025 Nubium. All rights reserved. 🇪🇺 Built in Europe, for Europe.

Nubium Cloud is a product by Cloudflavor GmbH