Run cloud GPUs from your terminal

Provision GPUs and run jobs across RunPod, Vast & more.
Sync results. Auto-stop when idle. $0 markup.

Simplicity of the CLI, Power of the Cloud

Purpose-built to save you time and money

Job Persistence

Close your laptop or go to sleep. Your jobs keep running. Reconnect anytime.

$0

Idle Charges

Auto-stop kicks in 5 minutes after your job finishes & never mid-run. Say goodbye to forgotten pods.

<6s

Rapid Connection

Skip the 30-45s wait for public IPs. Connect via relay almost instantly.

Choose your own hardware

From RTX 5090 to H100. Prices set by your preferred provider, sans markup.

RTX 5090 · 32GB VRAM · LoRA training, image generation · ~$0.89/hr*
A40 · 48GB VRAM · Production inference, large models · ~$0.40/hr*
A100 PCIe · 80GB VRAM · LLM fine-tuning, transformers · ~$1.39/hr*
H100 PCIe · 80GB VRAM · Frontier research, massive models · ~$2.39/hr*

*Approximate RunPod pricing shown for comparison. Actual rates vary by provider. View current pricing

Your code, your keys, your data

Credentials in OS keychain

Your GPU provider API key stays in your system keychain. Never stored in config files or transmitted to us.

SSH-secured transport

Code syncs directly to your GPU provider over SSH or secure tunnels. On RunPod, data at rest relies on provider encryption; supported providers can also use GPU CLI-managed LUKS.

Automatic result sync

Output files sync back to your machine as they're created. No manual downloads needed.

Config-driven setup

Reads your pyproject.toml or gpu.jsonc. Define dependencies once, run anywhere.
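
A `gpu.jsonc` for the setup described above might look something like this sketch. The field names (`gpu`, `provider`, `sync`, `idleTimeoutMinutes`) are illustrative assumptions, not the tool's documented schema:

```jsonc
{
  // Hypothetical gpu.jsonc — keys shown are illustrative, not the documented schema
  "gpu": "RTX 5090",              // hardware to request from the provider
  "provider": "runpod",           // e.g. RunPod or Vast
  "sync": {
    "include": ["src/", "data/"], // code synced to the pod over SSH
    "results": ["outputs/"]       // files synced back as they're created
  },
  "idleTimeoutMinutes": 5         // auto-stop window after the job finishes
}
```

If you already keep dependencies in `pyproject.toml`, the tool reads those instead, so one file can drive both local and cloud runs.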

Pro • $29/mo

Go Pro for commercial work

Get a commercial license and priority email support today. Hard spend caps and deeper cost reporting are on the roadmap.

Get Started
Cost tracking & export (soon)
Hard spend caps that actually stop pods (soon)
Commercial license included
Priority email support

Questions

Ready to run?

Install in 10 seconds. Start for free.