

VESSL Cloud
Documentation
On-demand GPUs, Docker-based environments, and team collaboration — built for ML.
Why VESSL Cloud
- Instant: launch GPU instances with JupyterLab and SSH access in a few clicks
- Consistent: Docker-based images ensure identical environments across teams and projects
- Scalable: scale GPU specs and instance counts as needed; pause or terminate when done
- Collaborative: share data and models through team volumes
- Cost visibility: clear billing states for running, paused, and terminated instances
Quickstart
Create your first Workspace in minutes.
Organization
Admin scope, policies, billing, and responsibilities.
Team
Collaboration model and shared resources.
Workspace (Run GPU instances)
GPU/CPU containers and cost controls.
More in the introduction
Roles & permissions
Admin vs. Member responsibilities and scopes.
Cluster
GPU/CPU execution environments for workspaces.
Storage
Cluster storage vs. object storage.
Billing
Credit model and per-state costs.
