DevStride uses Pulumi for infrastructure-as-code and GitHub Actions for CI/CD. The ds deploy commands orchestrate both, giving you a simple interface for managing cloud environments.
ds deploy up
Dispatches a GitHub Actions workflow that runs `pulumi up` to create/update AWS resources. The CLI streams the workflow logs to your terminal in real time.
# Deploy a specific branch
ds deploy up feature/auth-improvements
# Specify the stage name explicitly
ds deploy up --stage phil-auth
# Specify the branch explicitly
ds deploy up --branch feature/auth-improvements
# Custom DNS label (the final stage identifier in URLs)
ds deploy up --route phil-auth-v2
# Allow creating a second stack when one already exists for this branch
ds deploy up --new-stack
How cloud stage names are resolved:

1. If `--stage` is provided, use it directly.
2. If `--branch` (or the positional argument) is provided, compute the stage name as `{developer}-{branch}`.

Cloud stages (`{developer}-{branch}`) are separate from your local stage (`{developer}-local`). Each cloud stage gets its own AWS resources, Neon database branch, and Pulumi stack, fully isolated from your local development environment and from other cloud stages.

A full stage includes approximately 50 AWS resources:
| Service | Resources |
|---|---|
| API Gateway | REST API with custom domain (api-{stage}.devstride.dev) |
| Lambda | API handler, event processors, FIFO consumers, webhook handlers |
| Cognito | User pool, app client, custom domain |
| DynamoDB | Application tables (imported from existing on first SST→Pulumi deploy) |
| S3 | Frontend hosting bucket, asset storage |
| CloudFront | CDN distribution for frontend |
| EventBridge | Event bus for domain/integration events |
| SQS/SNS | Message queues and topics (FIFO and standard) |
| Step Functions | Workflow state machines (via @pulumi/cdk) |
| Secrets Manager | Stage-specific configuration |
| IAM | Roles and policies for all services |
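The stage-name resolution rules above can be sketched roughly as follows. This is an illustrative sketch, not the CLI's actual implementation; the function name and the slash-to-dash branch sanitization are assumptions:

```python
def resolve_stage(developer, stage=None, branch=None):
    """Sketch of cloud stage-name resolution.

    Mirrors the documented precedence: an explicit --stage wins;
    otherwise the stage is computed as {developer}-{branch}.
    The branch sanitization (slashes/dots -> dashes) is an assumption,
    since stage names end up in DNS labels.
    """
    if stage:
        return stage
    if branch:
        # "feature/auth-improvements" must become a DNS-safe label,
        # so replace separators with dashes (assumed behavior).
        safe = branch.replace("/", "-").replace(".", "-").lower()
        return f"{developer}-{safe}"
    raise ValueError("either --stage or --branch is required")
```

For example, `resolve_stage("phil", branch="feature/auth-improvements")` would yield `phil-feature-auth-improvements`, while an explicit `--stage` value is passed through untouched.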
Deploy times vary by what changed:

| Scenario | Approximate Duration |
|---|---|
| First deploy (all resources) | 8-12 minutes |
| Incremental update (code changes) | 3-5 minutes |
| Infrastructure-only change | 2-4 minutes |
| Frontend-only change | 1-2 minutes |
ds deploy down phil-feature-auth
If you omit the stage name, the CLI prompts you to select one.
The teardown is thorough:
- `pulumi destroy` removes all AWS resources
- The Neon database branch is kept by default; pass `--delete-neon` to also remove it
- The Pulumi stack state is removed (unless you pass `--keep-stack`)

| Flag | Description |
|---|---|
| `--keep-stack` | Keep the Pulumi stack state (skip `pulumi stack rm`) |
| `--delete-neon` | Also delete the Neon database branch (default: kept) |
| `--dry-run` | Preview what would be cleaned up without executing |
Teardown requires you to type the stage name to confirm:
This will destroy ALL resources for stage 'phil-feature-auth'.
Type the stage name to confirm: phil-feature-auth
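The confirmation prompt and the protected-stage guard amount to two simple checks; a minimal sketch (the function name and the exception choice are illustrative, not the CLI's actual code):

```python
PROTECTED_STAGES = {"dev", "prod"}  # shared environments; never torn down via CLI

def confirm_teardown(stage, typed):
    """Sketch of the `ds deploy down` safety checks (illustrative only).

    Protected stages are rejected outright; for everything else the user
    must retype the exact stage name before destruction proceeds.
    """
    if stage in PROTECTED_STAGES:
        raise PermissionError(f"stage '{stage}' is protected and cannot be torn down")
    return typed == stage
```

Requiring the exact stage name (rather than a y/n prompt) makes it hard to destroy the wrong stage by reflex.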
`ds deploy down` is blocked on protected stages (`dev`, `prod`). These environments are shared and cannot be torn down from the CLI.

ds deploy list
Shows a table of all deployed Pulumi stages.
ds deploy list --stage phil-local
Shows expanded details for a single stage.
| Flag | Description |
|---|---|
| `--stage <name>` | Show details for a single stage |
| `--region <region>` | Filter by AWS region |
| `--json` | Machine-readable JSON output |
When you only changed frontend code and don't want to wait for a full pulumi up, deploy just the UI:
# Build + sync to S3 + invalidate CloudFront (~60 seconds)
ds deploy frontend [stage]
# Skip rebuild — sync an existing dist/ folder
ds deploy frontend [stage] --skip-build
This is purely local — no GitHub Actions, no Pulumi. It requires infrastructure to already be deployed (ds deploy up must have run at least once). Useful for rapid UI iteration on a cloud stage.
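The frontend-only path boils down to three local steps: build, sync, invalidate. This sketch just assembles the commands; the bucket name and CloudFront distribution ID are placeholders, and the real CLI presumably resolves them from the stage's Pulumi outputs:

```python
def frontend_deploy_commands(bucket, distribution_id, skip_build=False):
    """Sketch of the local frontend deploy: build, sync, invalidate.

    `bucket` and `distribution_id` are placeholders; the actual CLI
    would look them up from the deployed stage's infrastructure state.
    """
    cmds = []
    if not skip_build:
        cmds.append("pnpm build")  # produce dist/ with Vite
    # Mirror dist/ into the hosting bucket, removing stale files.
    cmds.append(f"aws s3 sync dist/ s3://{bucket} --delete")
    # Invalidate the CDN so the new assets are served immediately.
    cmds.append(
        f"aws cloudfront create-invalidation "
        f"--distribution-id {distribution_id} --paths '/*'"
    )
    return cmds
```

With `--skip-build`, only the sync and invalidation run, which is why an existing `dist/` folder is required in that mode.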
If AWS resources were deleted or modified outside of Pulumi (via the AWS console, a partial failed deploy, or manual cleanup), pulumi up will fail with resource conflicts. Fix this with:
ds deploy refresh
This runs pulumi refresh to sync Pulumi's state file with the real AWS state. It only reads from AWS — it never modifies or deletes resources. After a refresh, re-run ds deploy up to bring the stage back to the desired state.
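This refresh-then-retry flow can be sketched as a small wrapper; the two callables stand in for the real `pulumi up` and `pulumi refresh` invocations:

```python
def deploy_with_drift_recovery(pulumi_up, pulumi_refresh):
    """Sketch: try the deploy once; on failure, refresh state and retry.

    `pulumi_up` is assumed to raise on failure; `pulumi_refresh`
    reconciles Pulumi's state file with real AWS state (read-only,
    never mutates resources). Both are stand-ins for CLI subprocesses.
    """
    try:
        return pulumi_up()
    except RuntimeError:
        pulumi_refresh()    # sync state with reality
        return pulumi_up()  # one retry; a second failure propagates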
`ds deploy up` already includes automatic drift recovery: it will run `pulumi refresh` and retry automatically if the initial deploy fails due to drift. `ds deploy refresh` is for manual use when you want to inspect or fix state explicitly.

Deployed stages follow predictable URL patterns:
| Stage | Type | API URL | UI URL |
|---|---|---|---|
| `dev` | Protected | https://api.devstride.dev | https://app.devstride.dev |
| `prod` | Protected | https://api.devstride.dev | https://app.devstride.dev |
| `phil-local` | Local stage | https://api-phil-local.devstride.dev | https://app-phil-local.devstride.dev |
| `phil-feature-auth` | Cloud stage | https://api-phil-feature-auth.devstride.dev | https://app-phil-feature-auth.devstride.dev |
Protected stages (dev, prod) use bare domains. Personal stages are prefixed with the stage name. Your local stage (phil-local) has the same URL structure as cloud stages.
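The URL scheme can be expressed as a small helper. This is illustrative only; the protected-stage set is taken from the table above:

```python
PROTECTED = {"dev", "prod"}  # bare domains, no stage prefix

def stage_urls(stage):
    """Sketch of the documented URL pattern for a stage.

    Protected stages use the bare domains; every other stage gets
    its name inserted after the api-/app- prefix.
    """
    if stage in PROTECTED:
        return ("https://api.devstride.dev", "https://app.devstride.dev")
    return (f"https://api-{stage}.devstride.dev",
            f"https://app-{stage}.devstride.dev")
```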
Under the hood, ds deploy up uses the GitHub API to dispatch a workflow:
ds deploy up
│
├── Resolves stage name and branch
├── Validates AWS credentials (re-auth if needed)
├── Dispatches deploy-stage.yml via gh api
├── Streams workflow logs to terminal
│
└── GitHub Actions (deploy-stage.yml):
├── Checkout code
├── pnpm install
├── Build frontend (Vite)
├── Bundle Lambdas (esbuild)
├── pulumi up (infrastructure)
├── Upload frontend to S3
├── Invalidate CloudFront
└── Output URLs
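The dispatch step corresponds to GitHub's `workflow_dispatch` REST endpoint. A sketch of the request the CLI presumably constructs; the endpoint shape is GitHub's documented API, but the `OWNER/REPO` path and the `stage` input name are assumptions:

```python
import json

def dispatch_payload(branch, stage):
    """Sketch of a workflow_dispatch call for deploy-stage.yml.

    POST /repos/{owner}/{repo}/actions/workflows/{workflow}/dispatches
    is GitHub's real endpoint; OWNER/REPO and the "stage" input name
    are placeholders for whatever the workflow actually declares.
    """
    url = ("https://api.github.com/repos/OWNER/REPO/actions/workflows/"
           "deploy-stage.yml/dispatches")
    # "ref" selects the branch the workflow runs against.
    body = json.dumps({"ref": branch, "inputs": {"stage": stage}})
    return url, body
```

The CLI wraps this in `gh api`, which handles authentication; the same call could be made with any HTTP client and a token.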
`pulumi up` needs consistent build environments, access to AWS credentials, and significant CPU/memory for Lambda bundling and CDK synthesis. Running in GitHub Actions also provides an audit trail of every deployment.

Check the GitHub Actions workflow run directly:
gh run list --workflow=deploy-stage.yml --limit=5
gh run view <run-id> --log
Your stage already has a Pulumi stack. This is normal for subsequent deploys — Pulumi updates in place. If you specifically need a second stack for the same branch, use --new-stack.
This can happen during deploy down. The CLI should handle this automatically by emptying buckets first. If it fails, manually empty the bucket in the AWS console, then re-run deploy down.
Each stage has fully isolated resources (different names, different ARNs). If you see conflicts, it usually means a stage-name collision; check `ds deploy list` for duplicate stage names.