But as far as I can tell, what that is really saying is that it's a snippet inside the run() {} function of your sst.config.ts file, which looks more like this: Config | SST
I imagine that’s what you were running into. At least that was the issue for me when I hit that error.
package.json
import { Util } from "@notes/core/util";
Specifically, where does @notes/* get defined?
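If this is the standard SST monorepo template, the alias usually comes from npm workspaces rather than SST itself. A sketch of the relevant bits (the exact fields may differ in your template):

```
// root package.json: declares the workspace packages
{ "workspaces": ["packages/*"] }

// packages/core/package.json: the "name" field is what import
// paths like "@notes/core/util" resolve against
{ "name": "@notes/core", "exports": { "./*": "./src/*.ts" } }
```

So `@notes/*` isn't defined in one central place; each package under `packages/` publishes its own `@notes/<name>` via the `name` field in its package.json.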
I recently switched my AWS CLI credentials to my personal account and then switched back to my work account. However, when I run sst dev, it seems to recreate all the resources instead of using the existing ones. It also doesn’t preserve the state of my previously deployed stage.
Is there a way to make SST use the existing deployed stack and avoid recreating everything? Would appreciate any help on this!
Thanks!
I have two questions:
I’m using the following setup:
sst.config.ts:
/// <reference path="./.sst/platform/config.d.ts" />
export default $config({
app(input) {
return {
name: "my-app",
removal: input?.stage === "production" ? "retain" : "remove",
protect: ["production"].includes(input?.stage),
home: "aws"
};
},
async run() {
const db = aws.rds.Instance.get("name", "existing-db-id");
// Attempting to import an existing VPC
const vpc = new aws.ec2.Vpc("importedVpc", {}, {
import: "vpc-xxxxx"
});
const api = new sst.aws.ApiGatewayV2("MyAPI", {
vpc: {
securityGroups: ["sg-xxxxx"],
subnets: ["subnet-xxxxx", "subnet-xxxxx"]
},
transform: {
route: {
args: { auth: { iam: false } }
}
}
});
api.route("GET /test", {
link: [db],
handler: "path/to/handler"
});
}
});
handler.js:
import { pool } from "./postgres.js";
export async function handler() {
try {
const res = await pool.query("SELECT NOW() as current_time");
return {
statusCode: 200,
body: JSON.stringify({
message: "Test successful!",
dbTime: res.rows[0].current_time
})
};
} catch (err) {
console.error("DB Error:", err);
return {
statusCode: 500,
body: JSON.stringify({ error: "Database connection failed." })
};
}
}
postgres.js:
import { Pool } from "pg";
export const pool = new Pool({
host: "hardcoded", // <-- How can I dynamically link this?
port: 5432,
user: "hardcoded",
password: "hardcoded",
database: "hardcoded",
max: 5,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
ssl: false
});
If I create the database via SST, I can use Resource.Db.endpoint — but what’s the best way to handle this when using aws.rds.Instance.get()?
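One approach (a sketch, assuming the `db` returned by aws.rds.Instance.get() above; the DB_* variable names are my own) is to pass the instance's outputs to the function as environment variables instead of hard-coding them:

```typescript
// In run(), replacing the hard-coded values in postgres.js:
api.route("GET /test", {
  handler: "path/to/handler",
  environment: {
    DB_HOST: db.address,             // hostname output of the RDS instance
    DB_PORT: db.port.apply(String),  // outputs are Output<T>, so stringify
    DB_NAME: db.dbName,
  },
});
```

postgres.js would then read process.env.DB_HOST and friends. Alternatively, wrapping the instance in a sst.Linkable would let you keep the same Resource-based access you get with SST-created databases.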
I’ve also tried creating both the RDS and Bastion host via SST and it works — the Lambda function can access the RDS — but I’m not sure how to tunnel through the Bastion to connect using pgAdmin from my local machine.
Feel free to suggest improvements, better practices, or even alternative IaC tools.
Thanks in advance!
When I run npx sst dev, I see the error ReferenceError: sst is not defined, although SST is installed and the project was initialized with npx sst@latest init.
time=2025-05-23T19:23:48.210+02:00 level=INFO msg="checking for pulumi" path=/home/user1/.config/sst/bin/pulumi
time=2025-05-23T19:23:48.850+02:00 level=INFO msg="checking for bun" path=/home/user1/.config/sst/bin/bun
time=2025-05-23T19:23:48.857+02:00 level=INFO msg="initializing project" version=3.16.0
time=2025-05-23T19:23:48.858+02:00 level=INFO msg="esbuild building" out=/home/user1/aws-nextjs/.sst/platform/sst.config.1748021028858.mjs
time=2025-05-23T19:23:48.870+02:00 level=INFO msg="esbuild built" outfile=/home/user1/aws-nextjs/.sst/platform/sst.config.1748021028858.mjs
time=2025-05-23T19:23:48.871+02:00 level=INFO msg="evaluating config"
time=2025-05-23T19:23:49.009+02:00 level=INFO msg="config evaluated"
time=2025-05-23T19:23:49.009+02:00 level=ERROR msg="exited with error" err="Error evaluating config: exit status 1\nfile:///home/user1/aws-nextjs/.sst/platform/sst.config.1748021028858.mjs:13\nvar bucket = new sst.aws.Bucket(\"MyBucket1\", {\n ^\n\nReferenceError: sst is not defined\n at file:///home/user1/aws-nextjs/.sst/platform/sst.config.1748021028858.mjs:13:14\n at ModuleJob.run (node:internal/modules/esm/module_job:195:25)\n at async ModuleLoader.import (node:internal/modules/esm/loader:336:24)\n at async loadESM (node:internal/process/esm_loader:34:7)\n at async handleMainPromise (node:internal/modules/run_main:106:12)\n\nNode.js v18.19.1\n"
✕ Unexpected error occurred. Please run with --print-logs or check .sst/log/sst.log if available.
export const userPool = new sst.aws.CognitoUserPool(
"UserPool",
{
usernames: ["email"],
verify: {
emailSubject: "Verify your email for our app",
emailMessage: "Hello {username}, your verification code is {####}",
},
transform: {
userPool: (args, _opts) => {
args.passwordPolicy = {
minimumLength: 10,
requireUppercase: false,
requireSymbols: false,
requireNumbers: false,
requireLowercase: false,
temporaryPasswordValidityDays: 7,
}
}
}
}
);
sst deploy, not sst dev
We are able to see the SST state files generated in S3 and infer whether they were created before an arbitrary date (based on the ‘LastModifiedDate’) and should therefore be cleaned up.
In v2, a CloudFormation stack was created, and removing stacks based on date was trivial. Is there a way in v3 to remove the stack without having the original sst.config.ts file?
We would prefer not to use the console to perform these actions.
I’m facing the same problem, but trying to update from Node 18 to Node 20. Changing the runtime seems to only update the server Lambda. There are ancillary Lambdas that SST builds that seem to be hard-coded to Node 18.
They asked me to explain my reason, and I said it was for learning and that I will use Stripe for my upcoming projects, but still no luck.
How did you guys manage it?
In my current SST project, I’m facing challenges with managing environment-specific configurations across multiple deployment stages (development, staging, production).
The configurations often differ significantly between environments, which makes it challenging to maintain consistency and avoid misconfigurations that could lead to unexpected behavior in production. I’m curious about strategies or tools within the SST ecosystem that can streamline this process.
The main issue arises from having to manually adjust settings for each environment, leading to increased complexity and potential for human error.
I’m looking for insights on automating environment configuration management, possibly by integrating centralized configuration services or leveraging SST’s built-in features to dynamically manage environment variables.
Detailed examples of how others have achieved a seamless transition between environments would be extremely helpful.
I’d appreciate hearing from community members who have tackled similar challenges, including any best practices or tips that have worked well for you. I checked the https://v2.sst.dev/configuring-sst guide related to this and found it quite informative.
For those new to managing configurations in SST, I recommend checking out the SST Configuration Guide as a starting point.
This resource offers valuable insights into how SST handles environment-specific settings and can serve as a useful reference for improving your deployment workflows.
Thank you!
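One lightweight pattern for the stage-drift problem above (a sketch, not an official SST feature; the setting names are invented) is to keep a single typed table of per-stage overrides in sst.config.ts and merge it over defaults, so every stage is declared in one place:

```typescript
// Hypothetical per-stage settings; the field names are illustrative.
type StageConfig = {
  memory: string;        // e.g. Lambda memory size
  logRetention: number;  // days
};

const defaults: StageConfig = { memory: "512 MB", logRetention: 7 };

// Only the differences per stage live here; everything else falls back
// to the defaults, so dev stages never silently drift from production.
const overrides: Record<string, Partial<StageConfig>> = {
  staging: { logRetention: 30 },
  production: { memory: "1024 MB", logRetention: 90 },
};

export function configForStage(stage: string): StageConfig {
  return { ...defaults, ...overrides[stage] };
}
```

Inside run() you could then call configForStage($app.stage) and feed the values into your components, which removes the manual per-environment adjustments.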
❌ SSTBootstrap failed: Resource handler returned message: "The runtime parameter of nodejs14.x is no longer supported for creating or updating AWS Lambda functions. We recommend you use a supported runtime while creating or updating functions.
Any solutions? I tried setting the runtime for the Lambda, but it’s not working.
plugins: [react()] is throwing an error under react, and hovering over it shows me this:
No overload matches this call.
The last overload gave the following error.
Type ‘PluginOption’ is not assignable to type ‘PluginOption’.
I’m really at a loss for how to solve this; googling it hasn’t gotten me anywhere.
So either let me know if there’s any existing solution or example that solves my problem, or please guide me on how I can implement this with Lambda Powertools.
A really short and abstracted example might do the trick.
Once again, to avoid ending up with so many Lambdas, is there any way I can handle similar routes like:
Appreciate any help !!!
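One pattern for the "similar routes" case above (just a sketch; the route strings and handler path are illustrative) is to register several related routes against one shared handler and branch on the event inside it:

```typescript
// In run(): many related routes, one Lambda.
const userRoutes = ["GET /users", "GET /users/{id}", "POST /users"];
for (const route of userRoutes) {
  api.route(route, "packages/functions/src/users.handler");
}
```

API Gateway HTTP APIs also support a `$default` catch-all route, so `api.route("$default", ...)` can funnel everything not matched elsewhere into a single router function.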
Run shopt -s globstar before running npx replace-in-file /monorepo-template/g notes **/*.* --verbose, or in some shells the replacement won’t happen.
That’s the case for my bash on both Mac and Ubuntu.
Works fine on 3.2.76.
I noticed this issue was created today: Auth component does not work in dev mode (Live) · Issue #5034 · sst/sst · GitHub, and a few people are having this same problem on Discourse, so it’s not just me.
Rolling back to 3.2.76 will work around the problem until something can be fixed in 3.3.x.
{statusCode: 200, body: "Connected"}. When I run under npx sst dev and try to GET it, all I get is:
{"body":"sst dev is not running"}
If I disable dev mode as follows:
const httpApi = new sst.aws.ApiGatewayV2("MyApi");
httpApi.route("GET /", {
handler: "packages/functions/src/connect.main",
dev: false
});
It does work!
I’m wondering if it’s an IAM permissions thing, but the IAM user used for deployment has full AdministratorAccess set up on AWS.
Why is the Live function feature not working?
I have stopped the sst dev process, but memory and CPU usage are still high for the node process. I verified no other process is running.
The parent process is pulumi-language-nodejs
From their docs:
“The CLI currently supports macOS, Linux, and WSL. Windows support is coming soon.”
I tried refreshing and still no dice. Any ideas?
% sst refresh --stage test-stage
SST 3.2.33 ready!
➜ App: MYAPP
Stage: test-stage
~ Refresh
✓ No changes
% sst remove --stage test-stage
SST 3.2.33 ready!
➜ App: MYAPP
Stage: test-stage
~ Remove
✓ No resources to remove
Environment: Mac mini (M1), Sonoma, sst 3.2.11, node v21.7.3
Code: ion/examples/aws-svelte-kit at dev · sst/ion · GitHub
…
npx sst upgrade
npm update
npx sst deploy --stage prod
… standard build info up to the error …
| Created MyWeb sst:aws:SvelteKit → MyWebBuild sst:Run (1.4s)
| Created LambdaEncryptionKey random:index:RandomBytes
| Created MyWeb sst:aws:SvelteKit → MyWebServer sst:aws:Function
| Error
| Error: read /Users/tonysamsom/Desktop/ion/examples/aws-svelte-kit/.svelte-kit/svelte-kit-sst/prerendered: is a directory
| at IncomingMessage. (file:///Users/tonysamsom/Desktop/ion/examples/aws-svelte-kit/.sst/platform/src/components/rpc/rpc.ts:42:22)
| at IncomingMessage.emit (node:events:526:35)
| at IncomingMessage.emit (node:domain:488:12)
| at endReadableNT (node:internal/streams/readable:1408:12)
| at processTicksAndRejections (node:internal/process/task_queues:82:21) {
| promise: Promise { [Circular *1] }
| }
| Created MyWeb sst:aws:SvelteKit → MyWebServerCachePolicy aws:cloudfront:CachePolicy
| Created MyWeb sst:aws:SvelteKit → MyWebServerLogGroup aws:cloudwatch:LogGroup
| Created MyWeb sst:aws:SvelteKit → MyWebAssetFiles sst:aws:BucketFiles (1.3s)
| Created MyWeb sst:aws:SvelteKit → MyWebServerRole aws:iam:Role (1.3s)
| Created MyWeb sst:aws:SvelteKit → MyWebCdn sst:aws:CDN
| Created MyWeb sst:aws:SvelteKit → MyWebCloudfrontFunctionServerCfFunction aws:cloudfront:Function (2.9s)
…
… after downgrading sst, the example code successfully deploys:
npx sst upgrade 3.1.78
npm update
npx sst deploy --stage prod
…
✓ Complete
MyWeb: https://d2a8v20tfn3pxn.cloudfront.net
I am working on a serverless project using SST and deploying with Seed, and I am facing some challenges managing multiple environments. While SST and Seed seem to handle this well, I have found it tricky to keep configurations consistent across these environments without duplicating code or running into issues with environment variables.
For those who have deployed a multi-environment setup with SST and Seed, how do you manage configurations and secrets effectively across different stages? Do you use a particular strategy to keep things DRY and avoid issues when scaling the app?
I have also checked this resource: https://discourse.sst.dev/t/how-are-sst-serverless-framework-meant-to-be-used-together/
Thank you.
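For secrets specifically, one SST v3 option (a sketch; the resource name is mine) is to declare them once in the config and set the value per stage from the CLI, so nothing stage-specific lives in code:

```typescript
// In run(): declared once, identical across all stages.
const stripeKey = new sst.Secret("StripeKey");
// ...then add link: [stripeKey] on the functions that need it,
// and read it at runtime via Resource.StripeKey.value.
```

Values are set per stage with `sst secret set StripeKey <value> --stage staging`, which keeps the config DRY while letting each stage hold its own secret.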
The code looks like this:
const identitypool = new sst.aws.CognitoIdentityPool("test", {
userPools:[
{
userPool: userPool.id,
client: client.id
}
]
});
npm install sst@two --save-exact
…and that seems to install version 2.43.7 of sst instead of 3.1.38. It seems like version 2 of sst does work on Windows, but it doesn’t work with the current guide. The next error I get when I run…
npx sst dev
…is
PS C:\Dev\notes> npx sst dev
Error: $config is not defined
Trace: ReferenceError: $config is not defined
at file:///C:/Dev/notes/.sst.config.1727266767391.mjs:42:26
at ModuleJob.run (node:internal/modules/esm/module_job:218:25)
at async ModuleLoader.import (node:internal/modules/esm/loader:329:24)
at async load (file:///C:/Dev/notes/node_modules/sst/stacks/build.js:101:21)
at async file:///C:/Dev/notes/node_modules/sst/project.js:49:40
at async initProject (file:///C:/Dev/notes/node_modules/sst/project.js:43:35)
at async file:///C:/Dev/notes/node_modules/sst/cli/program.js:36:9
at process. (file:///C:/Dev/notes/node_modules/sst/cli/sst.js:58:21)
at process.emit (node:events:530:35)
at process.emit (node:domain:488:12)
at process._fatalException (node:internal/process/execution:178:25)
at processPromiseRejections (node:internal/process/promises:289:13)
at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
I have created a simple Next.js site that contains an api route (app router). The api route accepts a payload via POST with some data. Then it calls the OpenAI api to retrieve a response. I want to stream that response back to my client component (similar to the ChatGPT interface). I have this working when running my SST stack in dev, but once I deploy it to prod, the response is no longer streamed, it just returns the full data once complete. Here is the code that does the stream/response:
const headers = new Headers({
"Content-Type": "text/plain; charset=utf-8",
"Transfer-Encoding": "chunked",
});
const stream = new ReadableStream({
async start(controller) {
try {
const openaiStream = await openai.chat.completions.create({
model: model.id,
messages,
stream: true,
});
for await (const chunk of openaiStream) {
const content = chunk.choices[0]?.delta?.content || "";
controller.enqueue(new TextEncoder().encode(content));
}
controller.close();
} catch (error) {
console.error("Error calling OpenAI API:", error);
controller.error(error);
}
},
});
return new NextResponse(stream, { headers });
Recently I posted a query on this website: https://discourse.sst.dev/t/akismet-has-temporarily-hidden-your-post/2970
After a few minutes, I got a notification that left me disappointed, as it showed my content was hidden. I don’t know the reason behind it, but I didn’t do anything wrong that violates the community guidelines. As a professional, I respect the community guidelines and try to make useful contributions to help community members. My query is genuine, I really need a quick solution to it, and I don’t think there is a better platform than this to solve it.
Kindly have a look at my post and make it live again.
I am working on integrating SST into an existing serverless application and could use some guidance on best practices. My application is built on AWS, utilizing Lambda, DynamoDB, API Gateway, and S3, with a deployment pipeline already set up using CI/CD. I have read through the SST documentation, but there are a few areas where I’m hoping to get some insights from those with experience in this domain.
My current application has a standard serverless.yml setup. How should I approach restructuring my project to incorporate SST? Should I migrate everything into SST at once, or is there a way to incrementally adopt SST?
One of the key reasons I am exploring SST is its Live Lambda debugging capabilities. Could anyone share their experiences with this feature, especially in terms of how it integrates with existing testing frameworks like Jest or Mocha? Are there any pitfalls I should be aware of?
My application is deployed across multiple environments. How does SST handle environment-specific configurations, especially when it comes to managing secrets and environment variables?
Also, I have gone through this post: https://discourse.sst.dev/t/provide-sst-construct-for-api-gateway-integration-mlops/ which definitely helped me out a lot.
I am particularly interested in hearing about real world experiences and any lessons learned during the migration process.
Thank you in advance for your help and assistance.
{
actions: ["s3:*"],
resources: [
$concat(bucket.arn, "/private/${cognito-identity.amazonaws.com:sub}/*"),
],
},
new iam.PolicyStatement({
actions: ["s3:*"],
effect: iam.Effect.ALLOW,
resources: [
bucket.bucketArn + "/private/${cognito-identity.amazonaws.com:sub}/*",
],
}),
Where does this code need to be placed?
ExampleStack Cluster/Cluster AWS::RDS::DBCluster CREATE_FAILED Resource handler returned message: "The engine mode serverless you requested is currently unavailable. (Service: Rds, Status Code: 400, Request ID: 1234)" (RequestToken: 1234, HandlerErrorCode: InvalidRequest)
After reading your “Moving away from CDK” blog post, I understand that fixing this issue won’t be worth anyone’s time.
I’ll have a look at the Ion-based setup.
Hope this comment spares someone time before going deep into debugging it.
The launch.json config needs to change from
"runtimeArgs": ["start", "--increase-timeout"],
to
"runtimeArgs": ["dev"],
to start.
Haven’t figured out how to increase the timeout so far.
Also, there is the following warning:
Warning: You are using a global installation of SST but you also have a local installation specified in your package.json. The local installation will be used but you should typically run it through your package manager.
I also created an issue for it.
]]>I am currently working through the SST intro notes tutorial.
Thanks for the nice introduction!
There are just two things I struggled with a bit so far:
AWS Accounts
It would have been great to see Adam’s video about AWS federated accounts before starting. I spent a lot of time setting up accounts and IAM users the wrong way before starting. Maybe link it in the first chapter, before it goes into the weeds?
CLI Tools
For the CLI calls, it would be nice to have a .env file with all the necessary variables, to be filled out during the tutorial.
.env:
API_ENDPOINT=https://YOUR-STACK-ID.execute-api.us-east-1.amazonaws.com
USER_POOL_CLIENT_ID=
IDENTITY_POOL_ID=
API_REGION=us-east-1
COGNITO_REGION=us-east-1
USER_POOL_ID=
STRIPE_SECRET_TEST_KEY=fill out
STRIPE_PUBLISHABLE_KEY=fill out
And then the CLI calls in little shell scripts, loading the .env file. E.g. 01_test_user_signup.sh:
source .env
AWS_PROFILE=plain-dev-sst aws cognito-idp sign-up \
--region ${COGNITO_REGION} \
--client-id ${USER_POOL_CLIENT_ID} \
--username [email protected] \
--password Passw0rd!
02_test_user_admin_confirm_signup.sh:
source .env
AWS_PROFILE=plain-dev-sst aws cognito-idp admin-confirm-sign-up \
--region ${COGNITO_REGION} \
--user-pool-id ${USER_POOL_ID} \
--username [email protected]
This would reduce room for error.
I hope this thread isn’t some kind of /dev/null btw.
Thanks for considering, and thanks for the great framework so far!
Greets,
Andreas
I use the Amazon-cognito-passwordless-auth package in my project. Does it work with your framework?
When I run npm run deploy -- --stage=production, I get the following error:
ReactSite-sebastian/S3Uploader: Resource handler returned message: "The runtime parameter of python3.7 is no longer supported for creating or updating AWS Lambda funct
ions. We recommend you use the new runtime (python3.12) while creating or updating functions. (Service: Lambda, Status Code: 400, Request ID: 06729a42-d421-4cb1-8ed8-7982f15c39cf)" (RequestToken: 9805f32c-22a1-dc80-a13c-9874c541cb11, HandlerErrorCode: InvalidRequest)
sst.config.ts
export default {
config(input) {
const PROFILE: Record<string, string> = {
dev: "staging",
production: "production",
default: "admin",
}
console.log("stage:", input.stage, input);
return {
name: "sst-app",
region: "eu-central-1",
profile: "default",
stage: input.stage,
}
},
stacks(app: App) {
app.setDefaultFunctionProps({
runtime: "nodejs18.x",
architecture: "arm_64",
});
// Remove all resources when non-prod stages are removed
if (app.stage !== "production") {
app.setDefaultRemovalPolicy(RemovalPolicy.DESTROY);
}
app
.stack(StorageStack)
.stack(AuthStack)
.stack(ApiStack)
.stack(WebsocketApiStack)
.stack(FrontendStack);
},
} satisfies SSTConfig
The funny thing is that I don’t even use Python for Lambda functions, only TypeScript. I think it might have gotten confused by the single utility Python files that reside in my project.