Building a Serverless API With Email Notifications in AWS with Terraform

Serverless Architecture Conference (https://serverless-architecture.io/), Tue, 19 Aug 2025

Welcome to the exciting world of serverless architecture on AWS! In this engaging journey, we'll craft a sophisticated system using Terraform and GitHub Actions. You will learn how to build a serverless CRUD API with NodeJS and Go. Discover the simplicity and scalability that can be achieved by combining AWS services and infrastructure as code (IaC) with Terraform.

Buckle up as we embark on an adventure through serverless NodeJS and Go.

Our main project will be a serverless architecture featuring API Gateway, Lambdas, DynamoDB, SNS, and SQS. We’ll work with Terraform, our infrastructure-as-code tool, and use GitHub Actions for our continuous integration and deployment, where we’ll deploy our infrastructure and lambda apps with a simple push to the main branch.

We’ll organize our HTTP methods, configure the AWS provider in Terraform, and even set up an S3 bucket for storing our precious Terraform state.

Our first task? Building a Lambda to fetch a movie by its ID. We’ll create modular and reusable Terraform code, deploy it using GitHub Actions, write a Lambda using NodeJS, and link it to DynamoDB.

Let the adventure begin!

Requirements:

  • An AWS account
  • Any code editor of your choice — I use Visual Studio Code
  • NodeJS
  • GitHub account — We’ll use GitHub Actions to deploy our Terraform code

Regarding AWS costs: everything we'll use is free or very low cost, and you won't be charged unless your usage is very high. If you're worried about unexpected charges, you can set up a $0.01 budget that alerts you as soon as anything is billed.

The Project

We will build an entire serverless architecture.

  • API Gateway — This is where the endpoints will be mapped and exposed.
  • Lambdas — They will handle API Gateway events and the SQS events.
  • DynamoDB — It will be our database.
  • SQS — Our message queue, where the email notification lambda will be notified whenever a movie is created, deleted, or updated.
  • SNS — The notification service that publishes events to SQS, enabling a fan-out pattern.
  • SES — AWS Simple Email Service, to manage and send emails from AWS.

 

 

Fig. 1: The serverless architecture we will build

We’ll also be using:

  • Terraform — Our infrastructure-as-code tool, which will create and manage our whole AWS infrastructure.
  • GitHub Actions — Our CI/CD, which will build and deploy our infrastructure and our lambdas.

Why serverless? Serverless computing is a cloud computing model where you don’t have to provision or manage servers. Instead, the cloud provider automatically manages the infrastructure, allowing developers to focus solely on writing code and deploying applications. The term “serverless” doesn’t mean no servers are involved. It means you don’t have to worry about the underlying server infrastructure.

Some of the benefits of serverless are:

  • Cost Savings — You only pay for the computing resources your code consumes.
  • Scalability — Serverless platforms automatically scale your applications based on demand without manual intervention.
  • Zero Idle Capacity — Resources are allocated only when needed, so you never pay for provisioned capacity sitting unused.

Let’s begin our project. We are adding our first lambda to get a movie by its ID.

 

Lambda module

Create a folder for your project, and inside it, create a folder named iac. This is where we'll add all our infrastructure as code. Inside it, create a new folder named modules, where we'll add our reusable Terraform modules. And now, add a folder lambda for our Lambda function module. Inside the lambda folder, create three files: main.tf, datasources.tf, and variables.tf.

  • main.tf — Holds the main code for our module: resource declarations, usage of other modules, etc.
  • datasources.tf — Holds any data that might need to be generated, transformed, or imported.
  • variables.tf — Defines all the input variables for our module.

For the main.tf file, add the following code:

resource "aws_iam_role" "iam_for_lambda" {
 name               = "${var.name}-lambda-role"
 assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_lambda_function" "lambda" {
 filename      = data.archive_file.lambda.output_path
 function_name = var.name
 role          = aws_iam_role.iam_for_lambda.arn
 handler       = var.handler
 runtime       = var.runtime
}

Here we declare an IAM role, which every Lambda function needs, and the function itself in aws_lambda_function. Note the keywords data and var: the first references values from our data sources, and the second references anything passed to the module through variables.

Now for the variables.tf:

variable "name" {
 description = "The name of the Lambda function"
 type        = string
 nullable    = false
}

variable "runtime" {
 description = "The runtime for the Lambda function"
 type        = string
 default     = "nodejs20.x"
}

variable "handler" {
 description = "The handler function in your code for the Lambda function"
 type        = string
 default     = "index.handler"
}

And for the datasources.tf:

locals {
 filename = strcontains(var.runtime, "node") ? "index.mjs" : "main"
}

data "archive_file" "lambda" {
 type        = "zip"
 source_file = "./modules/lambda/init_code/${local.filename}"
 output_path = "${var.name}_lambda_function_payload.zip"
}

data "aws_iam_policy_document" "assume_role" {
 statement {
   effect = "Allow"

   principals {
     type        = "Service"
     identifiers = ["lambda.amazonaws.com"]
   }

   actions = ["sts:AssumeRole"]
 }
}

Here, we define the IAM policy for the Lambda role and the seed file that will be zipped into the function. Terraform requires a code file even though the actual code will be deployed through a separate workflow, which we'll set up later. The filename local variable picks the seed file based on the Lambda runtime. So, let's add our seed code.

If you’d like to enable logging to CloudWatch, you can add this policy document:

data "aws_iam_policy_document" "lambda_logging" {
 statement {
   effect = "Allow"

   actions = [
     "logs:CreateLogGroup",
     "logs:CreateLogStream",
     "logs:PutLogEvents",
   ]

   resources = ["arn:aws:logs:*:*:*"]
 }
}

And then, in main.tf, add the logging policy and its attachment:

resource "aws_iam_policy" "lambda_logging" {
 name        = "lambda_logging_${aws_lambda_function.lambda.function_name}"
 path        = "/"
 description = "IAM policy for logging from a lambda"
 policy      = data.aws_iam_policy_document.lambda_logging.json
}

resource "aws_iam_role_policy_attachment" "lambda_logs" {
 role       = aws_iam_role.iam_for_lambda.name
 policy_arn = aws_iam_policy.lambda_logging.arn
}

Create a folder named init_code under the lambda module folder. For the Node.js seed code, you can create a new file index.mjs and add the following code:

// Default handler generated in AWS
export const handler = async (event) => {
 const response = {
   statusCode: 200,
   body: JSON.stringify('Hello from Lambda!'),
 };
 return response;
};

Note that the file needs the .mjs extension because we are not adding a package.json file to declare the module type. With this extension, Node.js treats the code as an ECMAScript module.
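Before wiring anything else up, you can sanity-check the seed handler locally with Node (re-declared inline here so the snippet is self-contained):

```javascript
// Same logic as the seed handler above, re-declared inline for illustration.
const handler = async (event) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return response;
};

// Invoke it the way the Lambda runtime would, with an (empty) event object.
handler({}).then((res) => console.log(res.statusCode, res.body));
// prints: 200 "Hello from Lambda!"
```

Note that the body is a JSON string, not an object; this matters once API Gateway is in front of the function.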

Adding the main infra code

In the iac folder, create a lambdas.tf file with the following code:

module "get_movie_lambda" {
 source        = "./modules/lambda"
 name          = "get-movie"
 runtime       = "nodejs20.x"
 handler       = "index.handler"
}

We also need to configure Terraform to use AWS as its provider. Create a provider.tf file with the following code:

terraform {
 required_providers {
   aws = {
     source  = "hashicorp/aws"
     version = "~> 5.0"
   }
 }
}

# Configure the AWS Provider
provider "aws" {
 region = var.region
}

And now, create two more files. First, variables.tf, to declare the default variables of our IaC:

variable "region" {
 description = "Default region of your resources"
 type        = string
 default     = "eu-central-1"
}

Then, create variables.tfvars to pass variable values that are not secret, but that we might want to change depending on the deployment configuration:

region="eu-central-1" // Change to your region

For Terraform to keep track of changes and know what to update, it needs somewhere to save and manage its state. Here, we'll use an S3 bucket for that. Create an S3 bucket named terraform-medium-api-notification and modify the provider.tf file with the following code:

terraform {
 required_providers {
   aws = {
     source  = "hashicorp/aws"
     version = "~> 5.0"
   }
 }

 backend "s3" {
   bucket = "terraform-medium-api-notification"
   key    = "state"
    region = "eu-central-1" // Change to your region
 }
}

# Configure the AWS Provider
provider "aws" {
 region = var.region
}
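If you haven't created the state bucket yet, one option is the AWS CLI (S3 bucket names are globally unique, so you may need to pick a different name and update the backend block accordingly):

```shell
# Create the state bucket (outside us-east-1, a LocationConstraint is required)
aws s3api create-bucket \
  --bucket terraform-medium-api-notification \
  --region eu-central-1 \
  --create-bucket-configuration LocationConstraint=eu-central-1

# Recommended for state buckets: enable versioning so earlier states can be recovered
aws s3api put-bucket-versioning \
  --bucket terraform-medium-api-notification \
  --versioning-configuration Status=Enabled
```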

Note that you can choose the region nearest to you instead of eu-central-1; I chose it simply because it is the closest to me. We are now ready to build the workflow that deploys our infrastructure to AWS.

 

Deploying the infrastructure

To deploy our infrastructure, we'll use GitHub Actions, GitHub's CI solution that runs workflows whenever we change the code. If you'd like to know more about it, check the documentation here.

To perform this step, you'll need to generate an AWS access key and secret for a user with the rights to create the resources we define. Add these as Action secrets in your repository's Settings:

 


Fig. 2

Now, in the root folder, let’s create a .github folder and a workflows folder inside of it. Create a file named deploy-infrastructure.yml and add the following code:

name: Deploy Infrastructure
on:
 push:
   branches:
     - main
   paths:
     - iac/**/*
      - .github/workflows/deploy-infrastructure.yml

defaults:
 run:
   working-directory: iac/

jobs:
 terraform:
   name: "Terraform"
   runs-on: ubuntu-latest
   steps:
     # Checkout the repository to the GitHub Actions runner
     - name: Checkout
       uses: actions/checkout@v3

     - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v4
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: eu-central-1 # Use your preferred region

     # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
     - name: Setup Terraform
       uses: hashicorp/setup-terraform@v3

     # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
     - name: Terraform Init
       run: terraform init

     # Checks that all Terraform configuration files adhere to a canonical format
     - name: Terraform Format
       run: terraform fmt -check

     # Generates an execution plan for Terraform
     - name: Terraform Plan
       run: terraform plan -out=plan -input=false -var-file="variables.tfvars"

       # On push to "main", build or change infrastructure according to Terraform configuration files
       # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
     - name: Terraform Apply
       run: terraform apply -auto-approve -input=false plan

Every time you push a change to a file inside the iac folder, this action will run and automatically create the resources in AWS for you:

Fig. 3: The Terraform Plan step

Note that in the Terraform Plan step, Terraform outputs all the changes it will perform in AWS, based on the current state in the S3 bucket. Now you can go to the Lambda page in AWS and see your newly created function:

Fig. 4: The Test tab

To test it, click on it and then on the Test tab.

Fig. 5

There, you can give a name to the test event that will be sent to the Lambda and then click on the Test button.

Fig. 6

You should see a success notification with the return from the lambda:

Fig. 7

Now, let’s add a GET endpoint through API Gateway so we can call our lambda through HTTP requests. Let’s first create a module for our HTTP methods. Under the folder modules, create a folder rest-api-method. Then, create three files: main.tf, variables.tf, and outputs.tf.

For the variables.tf, add the following code:

variable "http_method" {
 description = "The HTTP method"
 type        = string
}

variable "resource_id" {
 description = "The ID of the resource this method is attached to"
 type        = string
}

variable "api_id" {
 description = "The ID of the API this method is attached to"
 type        = string
}

variable "integration_uri" {
 description = "The URI of the integration this method will call"
 type        = string
}

variable "resource_path" {
 description = "The path of the resource"
 type        = string
}

variable "lambda_function_name" {
 description = "The name of the Lambda function that will be called"
 type        = string
}

variable "region" {
 description = "The region of the REST API resources"
 type        = string
}

variable "account_id" {
 description = "The ID of the AWS account"
 type        = string
}

For the outputs.tf:

output "id" {
 value = aws_api_gateway_method.method.id
}

output "integration_id" {
 value = aws_api_gateway_integration.integration.id
}

Now for the main.tf :

resource "aws_api_gateway_method" "method" {
 authorization = "NONE"
 http_method   = var.http_method
 resource_id   = var.resource_id
 rest_api_id   = var.api_id
}

resource "aws_api_gateway_integration" "integration" {
 http_method             = aws_api_gateway_method.method.http_method
 integration_http_method = "POST" # Lambda functions can only be invoked via POST
 resource_id             = var.resource_id
 rest_api_id             = var.api_id
 type                    = "AWS_PROXY"
 uri                     = var.integration_uri
}

resource "aws_lambda_permission" "apigw_lambda" {
 statement_id  = "AllowExecutionFromAPIGateway"
 action        = "lambda:InvokeFunction"
 function_name = var.lambda_function_name
 principal     = "apigateway.amazonaws.com"
 source_arn    = "arn:aws:execute-api:${var.region}:${var.account_id}:${var.api_id}/*/${aws_api_gateway_method.method.http_method}${var.resource_path}"
}

This will generate an HTTP method attached to your API, using Lambda proxy integration. We want our Lambda to be responsible for the HTTP behavior of the request and response, so API Gateway simply passes everything through.
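Because the integration type is AWS_PROXY, API Gateway performs no request or response mapping: every handler behind this module must return the complete HTTP response shape itself. A minimal sketch of that shape (the helper name here is ours, not an AWS API):

```javascript
// Hypothetical helper producing the response shape Lambda proxy integration expects.
const proxyResponse = (statusCode, payload) => ({
  statusCode,
  headers: { 'Content-Type': 'application/json' },
  // The body must already be a string; API Gateway will not serialize it for you.
  body: JSON.stringify(payload),
});

console.log(proxyResponse(200, { message: 'ok' }));
```

Returning a raw object as the body (instead of a string) is a classic source of 500 errors with proxy integration.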

Now, in the root iac folder, create a rest-api.tf file and add the following code:

# API Gateway
resource "aws_api_gateway_rest_api" "movies_api" {
 name = "movies-api"
}

resource "aws_api_gateway_deployment" "movies_api_deployment" {
 rest_api_id = aws_api_gateway_rest_api.movies_api.id

 triggers = {
   redeployment = sha1(jsonencode([
     aws_api_gateway_resource.movies_root_resource.id,
     module.get_movie_method.id,
     module.get_movie_method.integration_id,
   ]))
 }

 lifecycle {
   create_before_destroy = true
 }
}

resource "aws_api_gateway_stage" "live" {
 deployment_id = aws_api_gateway_deployment.movies_api_deployment.id
 rest_api_id   = aws_api_gateway_rest_api.movies_api.id
 stage_name    = "live"
}

resource "aws_api_gateway_resource" "movies_root_resource" {
 parent_id   = aws_api_gateway_rest_api.movies_api.root_resource_id
 path_part   = "movies"
 rest_api_id = aws_api_gateway_rest_api.movies_api.id
}

module "get_movie_method" {
 source               = "./modules/rest-api-method"
 api_id               = aws_api_gateway_rest_api.movies_api.id
 http_method          = "GET"
 resource_id          = aws_api_gateway_resource.movies_root_resource.id
 resource_path        = aws_api_gateway_resource.movies_root_resource.path
 integration_uri      = module.get_movie_lambda.invoke_arn
 lambda_function_name = module.get_movie_lambda.name
 region               = var.region
 account_id           = var.account_id
}

In the variables.tf, add the variable for account_id:

variable "account_id" {
 description = "The ID of the default AWS account"
 type        = string
}

You can either add your account ID to the variables.tfvars file, or expose it as an environment variable with the prefix TF_VAR_; Terraform maps such variables to the name after the prefix, so TF_VAR_account_id becomes account_id. For the second option, add it to the workflow so the Terraform Plan and Terraform Apply steps can read it:

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    env:
      TF_VAR_account_id: YOUR_ACCOUNT_ID

Just remember to replace YOUR_ACCOUNT_ID with your actual account ID. This will generate an API with the resource movies, exposed at the path /movies. It will also create a stage named live. Stages are the equivalent of deployment environments, and an API must be deployed to a stage before it can be called. So our movies endpoint will be /live/movies.

Then, it will create a deployment configured to redeploy to the live stage whenever we change the method, integration, or resource.
Now, push it to GitHub and wait for the workflow to create your API. After it finishes, you can go to the API Gateway page in AWS and see your API.

Fig. 8

And when you click on it, you can see all the details about the resources:

Fig. 9

To see the public URL, you can go to the Stages section:

Fig. 10

Now, if you call the /movies in your browser, you should get the response from the Lambda:

 

Fig. 11

We must make one adjustment to use the correct path. We created the resource /movies and attached the GET method directly to it, but our Lambda will fetch a movie by its ID, so we need a new child resource to attach the Lambda to correctly.

So, let’s create a new resource by adding the following code to the root rest-api.tf file:

resource "aws_api_gateway_resource" "movie_resource" {
 parent_id   = aws_api_gateway_resource.movies_root_resource.id
 path_part   = "{movieID}"
 rest_api_id = aws_api_gateway_rest_api.movies_api.id
}

Add it to the redeployment trigger in the movies_api_deployment:

resource "aws_api_gateway_deployment" "movies_api_deployment" {
 rest_api_id = aws_api_gateway_rest_api.movies_api.id

 triggers = {
   redeployment = sha1(jsonencode([
     aws_api_gateway_resource.movies_root_resource.id,
     aws_api_gateway_resource.movie_resource.id,
     module.get_movie_method.id,
     module.get_movie_method.integration_id,
   ]))
 }

 lifecycle {
   create_before_destroy = true
 }
}

Then, modify the get_movie_method module to point to the new resource (keeping the region and account_id arguments):

module "get_movie_method" {
 source               = "./modules/rest-api-method"
 api_id               = aws_api_gateway_rest_api.movies_api.id
 http_method          = "GET"
 resource_id          = aws_api_gateway_resource.movie_resource.id
 resource_path        = aws_api_gateway_resource.movie_resource.path
 integration_uri      = module.get_movie_lambda.invoke_arn
 lambda_function_name = module.get_movie_lambda.name
 region               = var.region
 account_id           = var.account_id
}

Push the code to GitHub, and Terraform will modify your infrastructure. Your API should look like this:

Fig. 12

Then, you can call the URL with an ID, /movies/123 for example, and you should get the same result as before.
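From the command line, the call looks like this (replace <api-id> with the ID shown for your API in the Stages section):

```shell
curl https://<api-id>.execute-api.eu-central-1.amazonaws.com/live/movies/123
```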

Adding DynamoDB

Now that we have a functioning API, let's add our database, DynamoDB, and hook it to our GET endpoint with some seed data.

Terraforming DynamoDB

Let's start by adding a new file to our iac folder named dynamodb.tf with the following code:

resource "aws_dynamodb_table" "movies-table" {
 name           = "Movies"
 billing_mode   = "PROVISIONED"
 read_capacity  = 1
 write_capacity = 1
 hash_key       = "ID"
 range_key      = "Title"

 attribute {
   name = "ID"
   type = "S"
 }

 attribute {
   name = "Title"
   type = "S"
 }
}

This will generate a minimum-capacity table named Movies, with a partition key named ID and a sort key named Title, both of type string (note that every key attribute must have a matching attribute block). When you push the code to GitHub and the action runs, you can go to the DynamoDB section of AWS and see the Movies table there.

Fig. 13

Let’s add a few seed items. In the dynamodb.tf file, add the following code for four table items:

resource "aws_dynamodb_table_item" "the_matrix" {
 table_name = aws_dynamodb_table.movies-table.name
 hash_key   = aws_dynamodb_table.movies-table.hash_key
 range_key  = aws_dynamodb_table.movies-table.range_key

 item = jsonencode(
   {
     ID    = { S = "1" },
     Title = { S = "The Matrix" },
     Genres = { SS = [
       "Action",
       "Sci-Fi",
       ]
     },
     Rating = { N = "8.7" }
   }
 )
}

resource "aws_dynamodb_table_item" "scott_pilgrim" {
 table_name = aws_dynamodb_table.movies-table.name
 hash_key   = aws_dynamodb_table.movies-table.hash_key
 range_key  = aws_dynamodb_table.movies-table.range_key

 item = jsonencode(
   {
     ID    = { S = "2" },
     Title = { S = "Scott Pilgrim vs. the World" },
     Genres = { SS = [
       "Action",
       "Comedy",
       ]
     },
     Rating = { N = "7.5" }
   }
 )
}

resource "aws_dynamodb_table_item" "star_wars" {
 table_name = aws_dynamodb_table.movies-table.name
 hash_key   = aws_dynamodb_table.movies-table.hash_key
 range_key  = aws_dynamodb_table.movies-table.range_key

 item = jsonencode(
   {
     ID    = { S = "3" },
     Title = { S = "Star Wars: Episode IV - A New Hope" },
     Genres = { SS = [
       "Action",
       "Adventure",
       "Fantasy",
       "Sci-Fi",
       ]
     },
     Rating = { N = "8.6" }
   }
 )
}

resource "aws_dynamodb_table_item" "star_wars_v" {
 table_name = aws_dynamodb_table.movies-table.name
 hash_key   = aws_dynamodb_table.movies-table.hash_key
 range_key  = aws_dynamodb_table.movies-table.range_key

 item = jsonencode(
   {
     ID    = { S = "4" },
     Title = { S = "Star Wars: Episode V - The Empire Strikes Back" },
     Genres = { SS = [
       "Action",
       "Adventure",
       "Fantasy",
       "Sci-Fi",
       ]
     },
     Rating = { N = "8.7" }
   }
 )
}

Now push to GitHub, wait for the workflow to run, and go to the DynamoDB Table in AWS to explore the table items and see the created records:

Fig. 14

Updating the Lambda to fetch by ID

Now that we have our data, we need to modify our Lambda to fetch it. First, we need to give the Lambda role the right to perform GetItem actions on the Movies table. Create an outputs.tf file in the lambda module folder and add the following code:

output "role_name" {
 value = aws_iam_role.iam_for_lambda.name
}
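Note that rest-api.tf also references module.get_movie_lambda.invoke_arn and module.get_movie_lambda.name, so the lambda module needs to expose those as outputs as well; a minimal version reads them straight from the function resource:

```hcl
# Expose the values the rest-api-method module consumes
output "invoke_arn" {
 value = aws_lambda_function.lambda.invoke_arn
}

output "name" {
 value = aws_lambda_function.lambda.function_name
}
```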

Now, in the iac folder, add a file named iam-policies.tf with the following code:

data "aws_iam_policy_document" "get_movie_item" {
 statement {
   effect = "Allow"

   actions = [
     "dynamodb:GetItem",
   ]

   resources = [
     aws_dynamodb_table.movies-table.arn
   ]
 }
}

resource "aws_iam_policy" "get_movie_item" {
 name        = "get_movie_item"
 path        = "/"
 description = "IAM policy allowing GET Item on Movies DynamoDB table"
 policy      = data.aws_iam_policy_document.get_movie_item.json
}

resource "aws_iam_role_policy_attachment" "allow_getitem_get_movie_lambda" {
 role       = module.get_movie_lambda.role_name
 policy_arn = aws_iam_policy.get_movie_item.arn
}

This will generate a policy that allows GetItem on the Movies table and attach it to the get-movie Lambda's IAM role. Now, in the root folder, create a folder named apps, and inside it a folder named get-movie. Inside this folder, let's start an npm project with:

npm init -y

This will generate a new package.json file. Most packages the Lambda needs to talk to AWS are already bundled in the AWS runtime and updated occasionally; we are creating this project mostly to have the packages available in our local development environment and to set up our module type.

In the package.json file, add the following property:

"type": "module"

Your file should look similar to:

{
 "name": "get-movie",
 "version": "1.0.0",
 "description": "",
 "main": "index.js",
 "type": "module",
 "scripts": {
   "test": "echo \"Error: no test specified\" && exit 1"
 },
 "keywords": [],
 "author": "",
 "license": "ISC"
}

Note that if a package is unavailable in the AWS environment, you must pack the node_modules folder together with your Lambda function code, or create a Lambda layer that holds node_modules and can be shared between Lambdas.
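For reference, a Node.js layer is just a zip whose node_modules sits under a nodejs/ folder; a rough sketch of packaging one (we won't need this for this project):

```shell
# Lambda layers for Node.js expect dependencies under nodejs/node_modules
mkdir -p layer/nodejs
cp -r node_modules layer/nodejs/
cd layer && zip -r ../dependencies-layer.zip nodejs && cd ..
```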

Let’s install the packages we’ll need with:

npm i --save aws-sdk @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb

Now, create a folder named src and add a file index.js to it, containing the code to fetch a movie by its ID:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const tableName = "Movies";

export const handler = async (event) => {
 const movieID = event.pathParameters?.movieID;

 if (!movieID) {
   return {
     statusCode: 400,
     body: JSON.stringify({
       message: "Movie ID missing",
     }),
   };
 }

 console.log("Getting movie with ID ", movieID);

 const client = new DynamoDBClient({});
 const docClient = DynamoDBDocumentClient.from(client);

 const command = new GetCommand({
   TableName: tableName,
   Key: {
     ID: movieID.toString(),
   },
 });

 try {
   const dynamoResponse = await docClient.send(command);
   if (!dynamoResponse.Item) {
     return {
       statusCode: 404,
       body: JSON.stringify({
         message: "Movie not found",
       }),
     };
   }

   const body = {
     title: dynamoResponse.Item.Title,
     rating: dynamoResponse.Item.Rating,
     id: dynamoResponse.Item.ID,
   };

   body.genres = Array.from(dynamoResponse.Item.Genres);

   const response = {
     statusCode: 200,
     body: JSON.stringify(body),
   };

   return response;
 } catch (e) {
   console.log(e);

   return {
     statusCode: 500,
     body: JSON.stringify({
       message: e.message,
     }),
   };
 }
};

This Lambda receives the event sent by API Gateway and extracts the movie ID. After some simple validation, it fetches the movie from DynamoDB, transforms the data into an API resource so we don't expose our data model, and returns it for API Gateway to send to the client.

If you'd like to learn more about the event API Gateway sends with Lambda proxy integration, you can see the documentation here. Remember to stringify the body, or you'll face 500 errors.
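You can exercise the extraction and validation logic locally with a hand-built event (a heavily trimmed version of what API Gateway really sends):

```javascript
// Minimal stand-in for an API Gateway proxy event; real events carry many more fields.
const sampleEvent = { pathParameters: { movieID: '1' } };

// The same extraction the handler performs; optional chaining guards against
// events that carry no path parameters at all.
const movieID = sampleEvent.pathParameters?.movieID;
console.log(movieID); // prints: 1

const emptyEvent = {};
const missingID = emptyEvent.pathParameters?.movieID;
console.log(missingID); // prints: undefined
```

When the ID is undefined, the handler takes the 400 branch instead of calling DynamoDB.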

Building and deploying

Lastly, we must create a quick build script to organize our code. First, install the following package:

npm i -D copyfiles

I'm using it because it makes file-copy commands work the same across operating systems. In the package.json file, add the following build script:

{
 "name": "get-movie",
 "version": "1.0.0",
 "description": "",
 "main": "index.js",
 "type": "module",
 "scripts": {
   "build": "copyfiles -u 1 src/**/* build/ && copyfiles package.json build/",
   "test": "echo \"Error: no test specified\" && exit 1"
 },
 "keywords": [],
 "author": "",
 "license": "ISC",
 "dependencies": {
   "@aws-sdk/client-dynamodb": "^3.468.0",
   "@aws-sdk/lib-dynamodb": "^3.468.0",
   "aws-sdk": "^2.1513.0"
 },
 "devDependencies": {
   "copyfiles": "^2.4.1"
 }
}

And now, let’s add the workflow that will push our code to the get-movie lambda. Create a deploy-get-movie-lambda.yml file in the .github/workflows folder and add the following code:

name: Deploy Get Movie Lambda
on:
 push:
   branches:
     - main
   paths:
     - apps/get-movie/**/*
     - .github/workflows/deploy-get-movie-lambda.yml

defaults:
 run:
   working-directory: apps/get-movie/

jobs:
 terraform:
   name: "Deploy GetMovie Lambda"
   runs-on: ubuntu-latest
   steps:
     # Checkout the repository to the GitHub Actions runner
     - name: Checkout
       uses: actions/checkout@v3

     - name: Setup NodeJS
       uses: actions/setup-node@v4
       with:
         node-version: 20

     - name: Configure AWS Credentials Action For GitHub Actions
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: eu-central-1

     - name: Install packages
       run: npm install

     - name: Build
       run: npm run build

     - name: Zip build
       run: zip -r -j main.zip ./build

     - name: Update Lambda code
       run: aws lambda update-function-code --function-name=get-movie --zip-file=fileb://main.zip

Remember to set the correct region. Now push the code to GitHub, wait for it to run, and then call the API Gateway endpoint /movies/1. You should receive a response similar to:

{
 "id":"1",
 "title":"The Matrix",
 "rating":8.7,
 "genres":[
   "Action",
   "Sci-Fi"
 ]
}

Amazing! We have our first endpoint completed!

Implementing Create Movie endpoint

Let's start by creating the action to create a movie. For the new Lambda, let's use Go as the runtime, so we need to adapt our Lambda module to support it. In the variables.tf of the lambda module, make sure the runtime variable is declared, and add a new init_filename variable for the seed file:

variable "runtime" {
 description = "The runtime for the Lambda function [nodejs20.x, go1.x]"
 type        = string
 default     = "nodejs20.x"
}

variable "name" {
 description = "The name of the Lambda function"
 type        = string
 nullable    = false
}

variable "handler" {
 description = "The handler function in your code for he Lambda function"
 type        = string
 default     = "index.handler"
}

variable "init_filename" {
 description = "The file containing the initial code for the Lambda"
 type        = string
 default     = "index.mjs"
}

For the main.tf:

resource "aws_iam_role" "iam_for_lambda" {
 name               = "${var.name}-lambda-role"
 assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_lambda_function" "lambda" {
 filename      = data.archive_file.lambda.output_path
 function_name = var.name
 role          = aws_iam_role.iam_for_lambda.arn
 handler       = var.handler
 runtime       = var.runtime
}

And for the datasources.tf, let's replace the runtime-based locals: the seed file is now passed in directly through the init_filename variable:

data "archive_file" "lambda" {
 type        = "zip"
 source_file = "./modules/lambda/init_code/${var.init_filename}"
 output_path = "${var.name}_lambda_function_payload.zip"
}

data "aws_iam_policy_document" "assume_role" {
 statement {
   effect = "Allow"
   principals {
     type        = "Service"
     identifiers = ["lambda.amazonaws.com"]
   }
   actions = ["sts:AssumeRole"]
 }
}

Note that the init_filename variable now selects which seed code each runtime gets. You already have the index.mjs file in the init_code folder. You can use the already-built Go main file in this repository, or compile your own from the following code:

package main

import (
  "context"
  "github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
  Name string `json:"name"`
}

type Response struct {
  Body       string `json:"body"`
  StatusCode int    `json:"statusCode"`
}

func HandleRequest(ctx context.Context, event *MyEvent) (*Response, error) {
  message := Response{
     Body:       "Hello from Lambda!",
     StatusCode: 200,
  }

  return &message, nil
}

func main() {
  lambda.Start(HandleRequest)
}
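If you compile it yourself, remember that the go1.x runtime executes a Linux binary whose name must match the configured handler (main in our module call); a typical cross-compilation, assuming the file is named main.go, looks like:

```shell
# Build a Linux binary named after the handler for the Lambda execution environment
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o main main.go
```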

You can find the code and build instructions here. Now, let’s add our module to the iac/lambdas.tf file:

module "create_movie_lambda" {
 source        = "./modules/lambda"
 name          = "create-movie"
 runtime       = "go1.x"
 handler       = "main"
}

To see a full list of all supported runtimes, check the documentation here.

Let’s also give our lambda permissions to add items to our table. In the iac/iam-policies.tf add:

data "aws_iam_policy_document" "put_movie_item" {
 statement {
   effect = "Allow"

   actions = [
     "dynamodb:PutItem",
   ]

   resources = [
     aws_dynamodb_table.movies-table.arn
   ]
 }
}

resource "aws_iam_policy" "put_movie_item" {
 name        = "put_movie_item"
 path        = "/"
 description = "IAM policy allowing PUT Item on Movies DynamoDB table"
 policy      = data.aws_iam_policy_document.put_movie_item.json
}

resource "aws_iam_role_policy_attachment" "allow_putitem_create_movie_lambda" {
 role       = module.create_movie_lambda.role_name
 policy_arn = aws_iam_policy.put_movie_item.arn
}

We need to add a new method to our iac/rest-api.tf file to link it to our lambda:

module "create_movie_method" {
 source               = "./modules/rest-api-method"
 api_id               = aws_api_gateway_rest_api.movies_api.id
 http_method          = "POST"
 resource_id          = aws_api_gateway_resource.movies_root_resource.id
 resource_path        = aws_api_gateway_resource.movies_root_resource.path
 integration_uri      = module.create_movie_lambda.invoke_arn
 lambda_function_name = module.create_movie_lambda.name
 region               = var.region
 account_id           = var.account_id
}

And then add the create_movie_method configuration to our deployment resource in the same file:

resource "aws_api_gateway_deployment" "movies_api_deployment" {
 rest_api_id = aws_api_gateway_rest_api.movies_api.id

 triggers = {
   redeployment = sha1(jsonencode([
     aws_api_gateway_resource.movies_root_resource.id,
     aws_api_gateway_resource.movie_resource.id,
     module.get_movie_method.id,
     module.get_movie_method.integration_id,
     module.create_movie_method.id,
     module.create_movie_method.integration_id,
   ]))
 }

 lifecycle {
   create_before_destroy = true
 }
}

Now push the code to GitHub and watch the lambda and the API get created:

Fig. 15

New API endpoint

Fig. 16

Lambda functions

You can test it by making a POST HTTP request to /movies; you should get a response with status 200 and a body like this:

Hello from Lambda!

Now, we need to code our lambda. In the folder apps, create a new folder named create-movie. Navigate to the folder and run the following command to initialize a new Go module:

go mod init example-movies.com/create-movie

Then, run the following commands to fetch the packages we’ll need:

go get "github.com/aws/aws-lambda-go"
go get "github.com/aws/aws-sdk-go"
go get "github.com/google/uuid"

Now, let’s set our models in a models.go file:

package main

type Request struct {
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

type Response struct {
  ID     string   `json:"id"`
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

type ErrorResponse struct {
  Message string `json:"message"`
}

type Movie struct {
  ID     string   `dynamodbav:",string"`
  Title  string   `dynamodbav:",string"`
  Genres []string `dynamodbav:",stringset"`
  Rating float64  `dynamodbav:",number"`
}

And then for our Lambda, a simple, straightforward implementation:

package main

import (
  "context"
  "encoding/json"

  "github.com/aws/aws-lambda-go/events"
  "github.com/aws/aws-lambda-go/lambda"
  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/dynamodb"
  "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
  "github.com/google/uuid"
)

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
  var newMovie Request
  err := json.Unmarshal([]byte(request.Body), &newMovie)

  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
      Message: "Got error marshalling new movie item, " + err.Error(),
     })

    return events.APIGatewayProxyResponse{
      Body:       string(response),
      StatusCode: 500,
    }, nil
  }

  sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
  }))

  // Create DynamoDB client
  dynamoDbService := dynamodb.New(sess)

  item := Movie{
   ID:     uuid.NewString(),
   Title:  newMovie.Title,
   Genres: newMovie.Genres,
   Rating: newMovie.Rating,
  }
  
  av, err := dynamodbattribute.MarshalMap(item)
  if err != nil {
   response, _ := json.Marshal(ErrorResponse{
    Message: "Got error marshalling new movie item to DynamoAttribute, " + err.Error(),
   })
  
   return events.APIGatewayProxyResponse{
      Body:       string(response),
    StatusCode: 500,
   }, nil
  }

  // Create item in table Movies
  tableName := "Movies"

  input := &dynamodb.PutItemInput{
   Item:      av,
   TableName: aws.String(tableName),
  }

  _, err = dynamoDbService.PutItem(input)
  if err != nil {
   response, _ := json.Marshal(ErrorResponse{
    Message: "Got error calling PutItem, " + err.Error(),
   })
  
   return events.APIGatewayProxyResponse{
    Body:       string(response),
    StatusCode: 500,
   }, nil
  }

  responseData := Response{
   ID:     item.ID,
   Title:  item.Title,
   Genres: item.Genres,
   Rating: item.Rating,
  }

  responseBody, err := json.Marshal(responseData)

  response := events.APIGatewayProxyResponse{
   Body:       string(responseBody),
   StatusCode: 200,
  }

  return response, nil
}

func main() {
  lambda.Start(handleRequest)
}

Deploying the lambda

We now have our lambda and infrastructure ready. It is time to deploy it. In the .github/workflows folder, create a new file named deploy-create-movie-lambda.yml. In it, add the following workflow code to build and deploy our Go lambda:

name: Deploy Create Movie Lambda
on:
 push:
   branches:
     - main
   paths:
     - apps/create-movie/**/*
     - .github/workflows/deploy-create-movie-lambda.yml

defaults:
 run:
   working-directory: apps/create-movie/

jobs:
 terraform:
   name: "Deploy CreateMovie Lambda"
   runs-on: ubuntu-latest
   steps:
     # Checkout the repository to the GitHub Actions runner
     - name: Checkout
       uses: actions/checkout@v3

      - uses: actions/setup-go@v5
       with:
         go-version: "1.21.4"

     - name: Configure AWS Credentials Action For GitHub Actions
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: eu-central-1 # Set your region here

     - name: Build Lambda
       run: GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o build/ .

       # The lambda requires that the executing file be named "main"
     - name: Rename file
       run: mv ./build/create-movie ./build/main

     - name: Zip build
       run: zip -r -j main.zip ./build

     - name: Update Lambda code
       run: aws lambda update-function-code --function-name=create-movie --zip-file=fileb://main.zip

Don’t forget to change the aws-region to your region. Once you deploy it, you can send a POST request to /movies with a body like this:

{
   "title": "Starship Troopers",
   "genres": ["Action", "Sci-Fi"],
   "rating": 7.3
}

And you should get a similar response:

{
   "id": "4e70fef6-d9cc-4056-bf9b-e513cdabc69f",
   "title": "Starship Troopers",
   "rating": 7.3,
   "genres": [
       "Action",
       "Sci-Fi"
   ]
}

Great! We have our endpoints to get a movie and to create a movie. Let’s go to our next step and create one to delete a movie.

Deleting a movie

Now let’s add our module to the iac/lambdas.tf file:

module "delete_movie_lambda" {
 source        = "./modules/lambda"
 name          = "delete-movie"
 runtime       = "nodejs20.x"
 handler       = "index.handler"
}

Let’s also give our lambda permissions to delete items from our table. In the iac/iam-policies.tf add:

data "aws_iam_policy_document" "delete_movie_item" {
 statement {
   effect = "Allow"
  
   actions = [
     "dynamodb:DeleteItem",
   ]

   resources = [
     aws_dynamodb_table.movies-table.arn
   ]
 }
}

resource "aws_iam_policy" "delete_movie_item" {
 name        = "delete_movie_item"
 path        = "/"
 description = "IAM policy allowing DELETE Item on Movies DynamoDB table"
 policy      = data.aws_iam_policy_document.delete_movie_item.json
}

resource "aws_iam_role_policy_attachment" "allow_deleteitem_delete_movie_lambda" {
 role       = module.delete_movie_lambda.role_name
 policy_arn = aws_iam_policy.delete_movie_item.arn
}

We need to add a new method to our iac/rest-api.tf file to link it to our lambda:

module "delete_movie_method" {
 source               = "./modules/rest-api-method"
 api_id               = aws_api_gateway_rest_api.movies_api.id
 http_method          = "DELETE"
 resource_id          = aws_api_gateway_resource.movie_resource.id
 resource_path        = aws_api_gateway_resource.movie_resource.path
 integration_uri      = module.delete_movie_lambda.invoke_arn
 lambda_function_name = module.delete_movie_lambda.name
 region               = var.region
 account_id           = var.account_id
}

And then add the delete_movie_method configuration to our deployment resource in the same file:

resource "aws_api_gateway_deployment" "movies_api_deployment" {
 rest_api_id = aws_api_gateway_rest_api.movies_api.id

 triggers = {
   redeployment = sha1(jsonencode([
     aws_api_gateway_resource.movies_root_resource.id,
     aws_api_gateway_resource.movie_resource.id,
     module.get_movie_method.id,
     module.get_movie_method.integration_id,
     module.create_movie_method.id,
     module.create_movie_method.integration_id,
     module.delete_movie_method.id,
     module.delete_movie_method.integration_id,
   ]))
 }

 lifecycle {
   create_before_destroy = true
 }
}

Now push the code to GitHub and watch the lambda and the API get created:

Fig. 17

And the lambda:

Fig. 18

Coding the lambda in TypeScript

For our delete-movie lambda, I want to show you how easy it is to use TypeScript to develop it. Let’s do as before and create a new folder under apps named delete-movie, navigate to it in the terminal, and run the following script to initialize our npm project:

npm init -y

Now let’s add TypeScript with:

npm i -D typescript

And then, in the package.json, add the type property with the value module, plus a new npm script named tsc that runs tsc:

{
 "name": "delete-movie",
 "version": "1.0.0",
 "description": "",
 "main": "index.js",
 "type": "module",
 "scripts": {
   "tsc": "tsc",
   "test": "echo \"Error: no test specified\" && exit 1"
 },
 "keywords": [],
 "author": "",
 "license": "ISC",
 "devDependencies": {
   "typescript": "^5.3.3"
 }
}

Run the following command to initialize our TypeScript project and generate a tsconfig.json file:

npm run tsc -- --init --target esnext --module nodenext \
--moduleResolution nodenext --rootDir src \
--outDir build --noImplicitAny --noImplicitThis --newLine lf \
--resolveJsonModule

If you are on Windows, run the following:

npm run tsc -- --init --target esnext --module nodenext `
--moduleResolution nodenext --rootDir src `
--outDir build --noImplicitAny --noImplicitThis --newLine lf `
--resolveJsonModule

Now let’s add our dependencies:

npm i @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb aws-sdk
npm i -D @types/aws-lambda copyfiles

Great! Now for our lambda implementation code, create a src folder and then an index.ts file with the following code:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, DeleteCommand } from "@aws-sdk/lib-dynamodb";
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const tableName = "Movies";

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
 const movieID = event.pathParameters?.movieID;

 if (!movieID) {
   return {
     statusCode: 400,
     body: JSON.stringify({
       message: "Movie ID missing",
     }),
   };
 }

 console.log("Deleting movie with ID ", movieID);

 const client = new DynamoDBClient({});
 const docClient = DynamoDBDocumentClient.from(client);

 const command = new DeleteCommand({
   TableName: tableName,
   Key: {
     ID: movieID.toString(),
   },
 });

 try {
   await docClient.send(command);

   return {
     statusCode: 204,
     body: JSON.stringify({
       message: `Movie ${movieID} deleted`,
     }),
   };
 } catch (e: any) {
   console.log(e);

   return {
     statusCode: 500,
     body: JSON.stringify({
       message: e.message,
     }),
   };
 }
};

Now we just need to add our build npm script and our deploy workflow. For the build script, add a new npm script named build:

{
 "name": "delete-movie",
 "version": "1.0.0",
 "description": "",
 "main": "index.js",
 "type": "module",
 "scripts": {
   "tsc": "tsc",
   "build": "tsc && copyfiles package.json build/",
   "test": "echo \"Error: no test specified\" && exit 1"
 },
 "keywords": [],
 "author": "",
 "license": "ISC",
 "dependencies": {
   "@aws-sdk/client-dynamodb": "^3.470.0",
   "@aws-sdk/lib-dynamodb": "^3.470.0",
   "aws-sdk": "^2.1515.0"
 },
 "devDependencies": {
   "@types/aws-lambda": "^8.10.130",
   "copyfiles": "^2.4.1",
   "typescript": "^5.3.3"
 }
}

Again, we copy our package.json file to let the Lambda runtime know about our project configuration. Also, if you need extra packages beyond those bundled with the runtime, you’ll need to install them beforehand and ship node_modules to your lambda alongside your main code.

Now for the GitHub actions workflow, create a deploy-delete-movie-lambda.yml file in the .github/workflows folder with the code:

name: Deploy Delete Movie Lambda
on:
 push:
   branches:
     - main
   paths:
     - apps/delete-movie/**/*
     - .github/workflows/deploy-delete-movie-lambda.yml

defaults:
 run:
   working-directory: apps/delete-movie/

jobs:
 terraform:
   name: "Deploy DeleteMovie Lambda"
   runs-on: ubuntu-latest
   steps:
     # Checkout the repository to the GitHub Actions runner
     - name: Checkout
       uses: actions/checkout@v3

     - name: Setup NodeJS
       uses: actions/setup-node@v4
       with:
         node-version: 20

     - name: Configure AWS Credentials Action For GitHub Actions
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: eu-central-1

     - name: Install packages
       run: npm install

     - name: Build
       run: npm run build

     - name: Zip build
       run: zip -r -j main.zip ./build

     - name: Update Lambda code
       run: aws lambda update-function-code --function-name=delete-movie --zip-file=fileb://main.zip

Push your code to GitHub and wait for it to succeed.

Now, since the deployed code is plain JavaScript, you can inspect it directly in the delete-movie lambda. Make a DELETE HTTP request to the /movies/{movieID} URL and you should receive a 204 status code. Then check the Movies table in DynamoDB to confirm your record was deleted.

Awesome! Now let’s dive into updating a movie.

Updating a movie

As the code will be similar, let’s also build this one using Go. Let’s add our infrastructure to the iac/lambdas.tf file:

module "update_movie_lambda" {
 source  = "./modules/lambda"
 name    = "update-movie"
 runtime = "go1.x"
 handler = "main"
}

Let’s also give our lambda permissions to update items from our table. In the iac/iam-policies.tf add:

data "aws_iam_policy_document" "update_movie_item" {
 statement {
   effect = "Allow"
  
   actions = [
     "dynamodb:UpdateItem",
   ]

   resources = [
     aws_dynamodb_table.movies-table.arn
   ]
 }
}

resource "aws_iam_policy" "update_movie_item" {
 name        = "update_movie_item"
 path        = "/"
 description = "IAM policy allowing UPDATE Item on Movies DynamoDB table"
 policy      = data.aws_iam_policy_document.update_movie_item.json
}

resource "aws_iam_role_policy_attachment" "allow_updateitem_update_movie_lambda" {
 role       = module.update_movie_lambda.role_name
 policy_arn = aws_iam_policy.update_movie_item.arn
}

We need to add a new method to our iac/rest-api.tf file to link it to our lambda:

module "update_movie_method" {
 source               = "./modules/rest-api-method"
 api_id               = aws_api_gateway_rest_api.movies_api.id
 http_method          = "PUT"
 resource_id          = aws_api_gateway_resource.movie_resource.id
 resource_path        = aws_api_gateway_resource.movie_resource.path
 integration_uri      = module.update_movie_lambda.invoke_arn
 lambda_function_name = module.update_movie_lambda.name
 region               = var.region
 account_id           = var.account_id
}

And then add the update_movie_method configuration to our deployment resource in the same file:

resource "aws_api_gateway_deployment" "movies_api_deployment" {
 rest_api_id = aws_api_gateway_rest_api.movies_api.id
 triggers = {

   redeployment = sha1(jsonencode([
     aws_api_gateway_resource.movies_root_resource.id,
     aws_api_gateway_resource.movie_resource.id,
     module.get_movie_method.id,
     module.get_movie_method.integration_id,
     module.create_movie_method.id,
     module.create_movie_method.integration_id,
     module.delete_movie_method.id,
     module.delete_movie_method.integration_id,
     module.update_movie_method.id,
     module.update_movie_method.integration_id,
   ]))
 }

 lifecycle {
   create_before_destroy = true
 }
}

Now push the code to GitHub and watch the lambda and the API get created:

Fig. 19

And the lambdas:

Fig. 20

Now, test the integration by making a PUT HTTP request to /movies/{movieID}; you should get back a 200 status code with:

Hello from Lambda!

Implementing the lambda code

In the folder apps, create a new folder named update-movie. Navigate to the folder and run the following command to initialize a new Go module:

go mod init example-movies.com/update-movie

Then, run the following commands to fetch the packages we’ll need:

go get "github.com/aws/aws-lambda-go"
go get "github.com/aws/aws-sdk-go"

Now, let’s set our models in a models.go file:

package main

type Request struct {
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

type ErrorResponse struct {
  Message string `json:"message"`
}

type MovieData struct {
  Title  string   `dynamodbav:":title,string" json:"title"`
  Genres []string `dynamodbav:":genres,stringset"  json:"genres"`
  Rating float64  `dynamodbav:":rating,number"  json:"rating"`
}

Note that we could create a shared module to reuse some of the Go code, but for the sake of simplicity, we are repeating the code here.

And then for our Lambda, a simple, straightforward implementation:

package main

import (
  "context"
  "encoding/json"
  "strings"

  "github.com/aws/aws-lambda-go/events"
  "github.com/aws/aws-lambda-go/lambda"
  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/dynamodb"
  "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
  movieID := request.PathParameters["movieID"]

  if strings.TrimSpace(movieID) == "" {
    response, _ := json.Marshal(ErrorResponse{
      Message: "Movie ID invalid",
    })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 400,
    }, nil
  }

  var updateMovie Request
  err := json.Unmarshal([]byte(request.Body), &updateMovie)

  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
      Message: "Got error marshalling update movie item, " + err.Error(),
    })

    return events.APIGatewayProxyResponse{
      Body:       string(response),
      StatusCode: 500,
    }, nil
  }

  sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
  }))

  // Create DynamoDB client
  dynamoDbService := dynamodb.New(sess)

  movie := MovieData{
    Title:  updateMovie.Title,
    Genres: updateMovie.Genres,
    Rating: updateMovie.Rating,
  }

  attributeMapping, err := dynamodbattribute.MarshalMap(movie)

  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
      Message: "Got error marshalling update movie item to DynamoAttribute, " + err.Error(),
    })

  return events.APIGatewayProxyResponse{
    Body:       string(response),
    StatusCode: 500,
   }, nil
  }

  // Create item in table Movies
  tableName := "Movies"

  input := &dynamodb.UpdateItemInput{
    ExpressionAttributeValues: attributeMapping,
    TableName:                 aws.String(tableName),
    Key: map[string]*dynamodb.AttributeValue{
      "ID": {
      S: aws.String(movieID),
     },
    },
    ReturnValues:     aws.String("UPDATED_NEW"),
    UpdateExpression: aws.String("set Rating = :rating, Title = :title, Genres = :genres"),
   }

  _, err = dynamoDbService.UpdateItem(input)
  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
     Message: "Got error calling UpdateItem, " + err.Error(),
    })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 500,
    }, nil
  }

  response := events.APIGatewayProxyResponse{
    StatusCode: 200,
  }

  return response, nil
}

func main() {
  lambda.Start(handleRequest)
}

And now to deploy it, in the .github/workflows, create a new deploy-update-movie-lambda.yml file and add the following code:

name: Deploy Update Movie Lambda
on:
 push:
   branches:
     - main
   paths:
     - apps/update-movie/**/*
     - .github/workflows/deploy-update-movie-lambda.yml

defaults:
 run:
   working-directory: apps/update-movie/

jobs:
 terraform:
   name: "Deploy UpdateMovie Lambda"
   runs-on: ubuntu-latest
   steps:
     # Checkout the repository to the GitHub Actions runner
     - name: Checkout
       uses: actions/checkout@v3

      - uses: actions/setup-go@v5
       with:
         go-version: "1.21.4"

     - name: Configure AWS Credentials Action For GitHub Actions
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: eu-central-1

     - name: Build Lambda
       run: GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o build/ .

       # The lambda requires that the executing file be named "main"
     - name: Rename file
       run: mv ./build/update-movie ./build/main

     - name: Zip build
       run: zip -r -j main.zip ./build

     - name: Update Lambda code
       run: aws lambda update-function-code --function-name=update-movie --zip-file=fileb://main.zip

Now, push it to GitHub and wait for the workflow to succeed. To test, send a PUT request to /movies/{movieID} with a body like the example below:

{
   "title": "Jurassic Park",
   "rating": 8.2,
   "genres": [
       "Action",
       "Adventure",
       "Sci-Fi",
       "Thriller"
   ]
}

Adding email notification

In this part, we’ll add SNS, SQS, and a Lambda to process changes to our movie database and notify via email.

SNS stands for Simple Notification Service. It is a fully managed AWS service that delivers notification messages from publishers to subscribers. SQS stands for Simple Queue Service, a fully managed message queue service where we can send messages that a consumer processes asynchronously. Combining both is useful for implementing a microservices architecture because it allows your systems to communicate asynchronously.

And lastly, we’ll make our lambda be triggered by a new SQS message and send an email through SES.

We will add our SNS topic and SQS queue to our Terraform code. First, in the iac folder, create a new file named messaging.tf and add the following code to generate our SNS and SQS:

resource "aws_sns_topic" "movie_updates" {
 name = "movie-updates-topic"
}

resource "aws_sqs_queue" "movie_updates_queue" {
 name   = "movie-updates-queue"
 policy = data.aws_iam_policy_document.sqs-queue-policy.json
}

resource "aws_sns_topic_subscription" "movie_updates_sqs_target" {
 topic_arn            = aws_sns_topic.movie_updates.arn
 protocol             = "sqs"
 endpoint             = aws_sqs_queue.movie_updates_queue.arn
 raw_message_delivery = true
}

data "aws_iam_policy_document" "sqs-queue-policy" {
 policy_id = "arn:aws:sqs:${var.region}:${var.account_id}:movie-updates-queue/SQSDefaultPolicy"

 statement {
   sid    = "movie_updates-sns-topic"
   effect = "Allow"

   principals {
     type        = "Service"
     identifiers = ["sns.amazonaws.com"]
   }

   actions = [
     "SQS:SendMessage",
   ]

   resources = [
     "arn:aws:sqs:${var.region}:${var.account_id}:movie-updates-queue",
   ]

   condition {
     test     = "ArnEquals"
     variable = "aws:SourceArn"

     values = [
       aws_sns_topic.movie_updates.arn,
     ]
   }
 }
}

We chose the fan-out pattern over publishing directly to SQS because SNS broadcasts each event to all of its subscriptions: if more services later need to be notified about messages from this topic, we simply add new subscriptions, which makes the architecture easy to expand.

Fig. 21: SNS fan-out pattern (source: AWS)

Now, run the GitHub workflow to create our queue and topic.

Publishing events to the SNS topic

To allow our lambdas to publish the events to SNS, we first need to give them access through IAM policies. To do that, add the following code to the iam-policies.tf file in the iac folder:

data "aws_iam_policy_document" "publish_to_movies_updates_sns_topic" {
 statement {
   effect = "Allow"

   actions = [
     "sns:Publish",
   ]

   resources = [
     aws_sns_topic.movie_updates.arn
   ]
 }
}

resource "aws_iam_policy" "publish_to_movies_updates_sns_topic" {
 name        = "publish_to_movies_updates_sns_topic"
 path        = "/"
 description = "IAM policy allowing to PUBLISH events to ${aws_sns_topic.movie_updates.name}"
 policy      = data.aws_iam_policy_document.publish_to_movies_updates_sns_topic.json
}

resource "aws_iam_role_policy_attachment" "allow_publish_to_movies_update_sns_create_movie_lambda" {
 role       = module.create_movie_lambda.role_name
 policy_arn = aws_iam_policy.publish_to_movies_updates_sns_topic.arn
}

resource "aws_iam_role_policy_attachment" "allow_publish_to_movies_update_sns_delete_movie_lambda" {
 role       = module.delete_movie_lambda.role_name
 policy_arn = aws_iam_policy.publish_to_movies_updates_sns_topic.arn
}

resource "aws_iam_role_policy_attachment" "allow_publish_to_movies_update_sns_update_movie_lambda" {
 role       = module.update_movie_lambda.role_name
 policy_arn = aws_iam_policy.publish_to_movies_updates_sns_topic.arn
}

This will allow them to perform the Publish action in our SNS topic, which will broadcast the event and be picked up by our SQS queue.

Publishing events

Now, on to the code. Let’s start with our create-movie lambda: it will send an event every time we add a new movie.

MovieCreated event

Now, go to the apps/create-movie folder. In the models.go file, let’s add a struct that will represent our event.

package main

type Request struct {
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

type Response struct {
  ID     string   `json:"id"`
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

type ErrorResponse struct {
  Message string `json:"message"`
}

type Movie struct {
  ID     string   `dynamodbav:",string"`
  Title  string   `dynamodbav:",string"`
  Genres []string `dynamodbav:",stringset,omitemptyelem"`
  Rating float64  `dynamodbav:",number"`
}

type MovieCreated struct {
  ID     string   `json:"id"`
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

func (event *MovieCreated) getEventName() string {
  return "MovieCreated"
}

Ideally, you’ll want to have these events in a shared package so consumers can use them. Now, let’s edit our main.go file to publish the event every time we create a new movie:

package main

import (
  "context"
  "encoding/json"
  "fmt"

  "github.com/aws/aws-lambda-go/events"
  "github.com/aws/aws-lambda-go/lambda"
  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/dynamodb"
  "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
  "github.com/aws/aws-sdk-go/service/sns"
  "github.com/google/uuid"
)

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
  var newMovie Request
  err := json.Unmarshal([]byte(request.Body), &newMovie)

  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
     Message: "Got error marshalling new movie item, " + err.Error(),
    })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 500,
    }, nil
  }

  sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
  }))

  // Create DynamoDB client
  dynamoDbService := dynamodb.New(sess)

  item := Movie{
    ID:     uuid.NewString(),
    Title:  newMovie.Title,
    Genres: newMovie.Genres,
    Rating: newMovie.Rating,
  }

  av, err := dynamodbattribute.MarshalMap(item)
  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
     Message: "Got error marshalling new movie item to DynamoAttribute, " + err.Error(),
   })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 500,
    }, nil
  }

  // Create item in table Movies
  tableName := "Movies"

  input := &dynamodb.PutItemInput{
    Item:      av,
    TableName: aws.String(tableName),
  }

  _, err = dynamoDbService.PutItem(input)
  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
     Message: "Got error calling PutItem, " + err.Error(),
    })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 500,
    }, nil
  }

  publishEventToSNS(sess, item)

  responseData := Response{
    ID:     item.ID,
    Title:  item.Title,
    Genres: item.Genres,
    Rating: item.Rating,
  }

  responseBody, err := json.Marshal(responseData)

  response := events.APIGatewayProxyResponse{
    Body:       string(responseBody),
    StatusCode: 200,
  }

  return response, nil
}

func publishEventToSNS(sess *session.Session, item Movie) {
  snsService := sns.New(sess)

  movieCreatedEvent := MovieCreated{
    ID:     item.ID,
    Title:  item.Title,
    Rating: item.Rating,
    Genres: item.Genres,
  }

  eventJSON, err := json.Marshal(movieCreatedEvent)
  if err != nil {
    fmt.Println("Got error marshalling event, " + err.Error())
    return
  }

  _, err = snsService.Publish(&sns.PublishInput{
    Message: aws.String(string(eventJSON)),
    MessageAttributes: map[string]*sns.MessageAttributeValue{
     "Type": {
      DataType:    aws.String("String"),
      StringValue: aws.String(movieCreatedEvent.getEventName()),
     },
    },
    TopicArn: aws.String("YOUR_SNS_TOPIC_ARN"), // Add your topic ARN here
   })

  if err != nil {
    fmt.Println(err.Error())
  }
}

func main() {
  lambda.Start(handleRequest)
}

Don’t forget to replace YOUR_SNS_TOPIC_ARN with the ARN of the topic created in the previous section through Terraform. To test it, create a new movie through the POST /movies endpoint, then go to the SQS queue and poll for messages to see it there:

Fig. 22

When you click on it, you can see the body:

Fig. 23

And the attributes:

Fig. 24

MovieDeleted event

Now, let’s move on to sending a deleted event from our delete-movie lambda. Go to the apps/delete-movie folder and then run the following npm command to add the SNS library:

npm i -s @aws-sdk/client-sns

Now, create a new models.ts file in the src folder to add our event type:

export type MovieDeleted = {
 id: string;
};

And now, let’s publish the message to SNS in the index.ts file:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, DeleteCommand } from "@aws-sdk/lib-dynamodb";
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { PublishCommand, SNSClient } from "@aws-sdk/client-sns";
import { MovieDeleted } from "./models.js";

const tableName = "Movies";

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
 const movieID = event.pathParameters?.movieID;

 if (!movieID) {
   return {
     statusCode: 400,
     body: JSON.stringify({
       message: "Movie ID missing",
     }),
   };
 }

 console.log("Deleting movie with ID ", movieID);

 const client = new DynamoDBClient({});
 const docClient = DynamoDBDocumentClient.from(client);

 const command = new DeleteCommand({
   TableName: tableName,
   Key: {
     ID: movieID.toString(),
   },
 });

 try {
   await docClient.send(command);

   await publishEventToSNS(movieID);

   return {
     statusCode: 204,
     body: "", // a 204 No Content response must not include a body
   };
 } catch (e: any) {
   console.log(e);

   return {
     statusCode: 500,
     body: JSON.stringify({
       message: e.message,
     }),
   };
 }
};

async function publishEventToSNS(movieID: string) {
 const snsClient = new SNSClient({});

 const event: MovieDeleted = {
   id: movieID,
 };

 const eventName = "MovieDeleted";

 try {
   await snsClient.send(
     new PublishCommand({
       Message: JSON.stringify(event),
       TopicArn: "YOUR_SNS_TOPIC_ARN", // Add your SNS topic ARN here
       MessageAttributes: {
         Type: {
           DataType: "String",
           StringValue: eventName,
         },
       },
     })
   );
 } catch (e: any) {
   console.warn(e);
 }
}

Don’t forget to change the YOUR_SNS_TOPIC_ARN value to your actual SNS topic ARN. Now, push it to GitHub, wait for the action to succeed, and then delete an existing movie through the DELETE /movies/{movieID} endpoint and check SQS for the message in the queue.

MovieUpdated event

And now for our last lambda, the update-movie lambda. Go to apps/update-movie folder and modify the models.go to add the MovieUpdated event:

package main

type Request struct {
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

type ErrorResponse struct {
  Message string `json:"message"`
}

type MovieData struct {
  Title  string   `dynamodbav:":title,string" json:"title"`
  Genres []string `dynamodbav:":genres,stringset,omitemptyelem"  json:"genres"`
  Rating float64  `dynamodbav:":rating,number"  json:"rating"`
}

type Movie struct {
  ID     string   `json:"id"`
  Title  string   `json:"title"`
  Genres []string `json:"genres"`
  Rating float64  `json:"rating"`
}

type MovieUpdated struct {
  ID     string   `json:"id"`
  Title  string   `json:"title"`
  Rating float64  `json:"rating"`
  Genres []string `json:"genres"`
}

func (event *MovieUpdated) getEventName() string {
  return "MovieUpdated"
}

And now, to add the code to the main.go file:

package main

import (
  "context"
  "encoding/json"
  "fmt"
  "strings"

  "github.com/aws/aws-lambda-go/events"
  "github.com/aws/aws-lambda-go/lambda"
  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/dynamodb"
  "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
  "github.com/aws/aws-sdk-go/service/sns"
)

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
  movieID := request.PathParameters["movieID"]

  if strings.TrimSpace(movieID) == "" {
    response, _ := json.Marshal(ErrorResponse{
     Message: "Movie ID invalid",
    })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 400,
    }, nil
  }

  var updateMovie Request
  err := json.Unmarshal([]byte(request.Body), &updateMovie)

  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
     Message: "Got error marshalling update movie item, " + err.Error(),
    })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 500,
    }, nil
  }

  sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
   }))

  // Create DynamoDB client
  dynamoDbService := dynamodb.New(sess)

  movieData := MovieData{
    Title:  updateMovie.Title,
    Genres: updateMovie.Genres,
    Rating: updateMovie.Rating,
  }

  attributeMapping, err := dynamodbattribute.MarshalMap(movieData)

  if err != nil {
    response, _ := json.Marshal(ErrorResponse{
      Message: "Got error marshalling update movie item to DynamoAttribute, " + err.Error(),
    })

    return events.APIGatewayProxyResponse{
     Body:       string(response),
     StatusCode: 500,
    }, nil
  }

  // Create item in table Movies
  tableName := "Movies"

  input := &dynamodb.UpdateItemInput{
    ExpressionAttributeValues: attributeMapping,
    TableName:                 aws.String(tableName),
    Key: map[string]*dynamodb.AttributeValue{
     "ID": {
      S: aws.String(movieID),
     },
    },
    ReturnValues:     aws.String("ALL_NEW"),
    UpdateExpression: aws.String("set Rating = :rating, Title = :title, Genres = :genres"),
  }

  updateResponse, err := dynamoDbService.UpdateItem(input)

  if err != nil {
    errorResponse, _ := json.Marshal(ErrorResponse{
     Message: "Got error calling UpdateItem, " + err.Error(),
    })

    return events.APIGatewayProxyResponse{
     Body:       string(errorResponse),
     StatusCode: 500,
    }, nil
  }

  var movie Movie
  err = dynamodbattribute.UnmarshalMap(updateResponse.Attributes, &movie)
  if err != nil {
    fmt.Println("Got error unmarshalling updated item: " + err.Error())
  }

  publishEventToSNS(sess, movie)

  response := events.APIGatewayProxyResponse{
    StatusCode: 200,
  }

  return response, nil
}

func publishEventToSNS(sess *session.Session, item Movie) {
  snsService := sns.New(sess)

  movieUpdatedEvent := MovieUpdated{
    ID:     item.ID,
    Title:  item.Title,
    Rating: item.Rating,
    Genres: item.Genres,
  }

  eventJSON, err := json.Marshal(movieUpdatedEvent)
  if err != nil {
    fmt.Println("Got error marshalling event: " + err.Error())
    return
  }

  _, err = snsService.Publish(&sns.PublishInput{
    Message: aws.String(string(eventJSON)),
    MessageAttributes: map[string]*sns.MessageAttributeValue{
     "Type": {
       DataType:    aws.String("String"),
       StringValue: aws.String(movieUpdatedEvent.getEventName()),
     },
    },
    TopicArn: aws.String("YOUR_SNS_TOPIC_ARN"),
  })

  if err != nil {
    fmt.Println(err.Error())
  }
}

func main() {
  lambda.Start(handleRequest)
}

Don’t forget to change the YOUR_SNS_TOPIC_ARN value to your actual SNS topic ARN. Now, push the code to GitHub, wait for the workflow to succeed, and test it by updating a movie and checking back on SQS for the MovieUpdated event.

Processing the SQS messages

Now, let’s build our lambda for processing our event messages in SQS. Ideally, we’d create one lambda to be responsible for each event type, but for simplicity, we’ll create a generic one to handle all three.

Let’s start by adding a new NodeJS lambda with a SQS trigger to our iac/lambdas.tf file:

module "process_movie_update_events_lambda" {
 source  = "./modules/lambda"
 name    = "process-movie-update-events"
 runtime = "nodejs20.x"
 handler = "index.handler"
}

resource "aws_lambda_event_source_mapping" "movie_update_events_trigger" {
 event_source_arn = aws_sqs_queue.movie_updates_queue.arn
 function_name    = module.process_movie_update_events_lambda.arn
 enabled          = true
}
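If you only need a subset of the events, the same mapping can optionally be extended with a filter. Here is a hedged sketch — the pattern assumes the SQS record body is the SNS JSON envelope, so adjust it to your actual message shape before using it:

```hcl
resource "aws_lambda_event_source_mapping" "movie_update_events_trigger" {
  event_source_arn = aws_sqs_queue.movie_updates_queue.arn
  function_name    = module.process_movie_update_events_lambda.arn
  enabled          = true

  # Hypothetical filter: only deliver MovieCreated events. Unmatched
  # messages are dropped by Lambda, not returned to the queue.
  filter_criteria {
    filter {
      pattern = jsonencode({
        body = {
          MessageAttributes = {
            Type = {
              Value = ["MovieCreated"]
            }
          }
        }
      })
    }
  }
}
```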

If you’d like to set filter_criteria, please note that Lambda event filtering deletes messages from the queue when they don’t match the filter criteria. Unmatched messages are removed by the Lambda service and can no longer be polled from the SQS queue, so choose your filters carefully.

We also need to add permissions to this lambda to pull messages from our SQS queue. In the iam-policies.tf add:

data "aws_iam_policy_document" "pull_message_from_sqs" {
 statement {
   effect = "Allow"

   actions = [
     "sqs:ReceiveMessage",
     "sqs:DeleteMessage",
     "sqs:GetQueueAttributes"
   ]

   resources = [
     aws_sqs_queue.movie_updates_queue.arn
   ]
 }
}

Note that if your SQS queue is encrypted with KMS, you’ll need to add the kms:Decrypt permission to the policy.
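In that case, a statement along these lines could be appended to the same policy document — a sketch, where aws_kms_key.queue_key is a hypothetical name for your key resource:

```hcl
statement {
  effect = "Allow"

  actions = [
    "kms:Decrypt"
  ]

  resources = [
    # Hypothetical KMS key resource used to encrypt the queue
    aws_kms_key.queue_key.arn
  ]
}
```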

Now, push the code to GitHub and wait for the workflow to succeed in creating our lambda and trigger. You can check if it worked by going to the lambda and seeing the trigger attached to it:

Fig. 25

Let’s code our Lambda. In the apps folder, create a new folder named process-movie-update-events and initialize a TypeScript project with:

npm init -y
npm i -s typescript

Inside the package.json add the tsc script:

{
 "name": "process-movie-update-events",
 "version": "1.0.0",
 "description": "",
 "main": "index.js",
 "scripts": {
   "tsc": "tsc",
   "build": "tsc",
   "test": "echo \"Error: no test specified\" && exit 1"
 },
 "keywords": [],
 "author": "",
 "license": "ISC",
 "dependencies": {
   "typescript": "^5.3.3"
 }
}

Now run the following command to initialize your TypeScript project:

npm run tsc -- --init --target esnext --module nodenext `
--moduleResolution nodenext --rootDir src `
--outDir build --noImplicitAny --noImplicitThis --newLine lf `
--resolveJsonModule
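The flags above should end up in the generated tsconfig.json roughly as follows — a sketch; the generated file also contains many commented-out options:

```json
{
  "compilerOptions": {
    "target": "esnext",
    "module": "nodenext",
    "moduleResolution": "nodenext",
    "rootDir": "src",
    "outDir": "build",
    "noImplicitAny": true,
    "noImplicitThis": true,
    "newLine": "lf",
    "resolveJsonModule": true
  }
}
```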

Create a new folder named src and a file named index.ts. In the index.ts add the following code:

import { SQSEvent, Context, SQSHandler, SQSRecord } from "aws-lambda";

export const handler: SQSHandler = async (event: SQSEvent, context: Context): Promise<void> => {
 for (const message of event.Records) {
   await processMessageAsync(message);
 }
 console.info("done");
};

async function processMessageAsync(message: SQSRecord): Promise<any> {
 try {
   console.log(`Processed ${message.messageAttributes["Type"].stringValue} message ${message.body}`);
   // TODO: Do interesting work based on the new message
   await Promise.resolve(1); //Placeholder for actual async work
 } catch (err) {
   console.error("An error occurred");
   throw err;
 }
}
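The loop above processes records one at a time, and because a thrown error fails the whole invocation, every message in the batch gets redelivered when a single one fails. If that matters for your use case, Lambda supports reporting partial batch failures instead. A minimal sketch — simplified local types stand in for the aws-lambda ones, and processMessageAsync is a placeholder:

```typescript
// Simplified stand-ins for the aws-lambda SQS types.
type SQSRecord = { messageId: string; body: string };
type SQSEvent = { Records: SQSRecord[] };
type SQSBatchResponse = { batchItemFailures: { itemIdentifier: string }[] };

// Report only failed records back to Lambda so that successfully
// processed messages are not redelivered with the rest of the batch.
export async function handler(event: SQSEvent): Promise<SQSBatchResponse> {
  const batchItemFailures: { itemIdentifier: string }[] = [];

  for (const record of event.Records) {
    try {
      await processMessageAsync(record);
    } catch {
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }

  return { batchItemFailures };
}

// Placeholder: fails on empty bodies to illustrate a poison message.
async function processMessageAsync(record: SQSRecord): Promise<void> {
  if (!record.body) {
    throw new Error(`Empty body in message ${record.messageId}`);
  }
  console.log(`Processed message ${record.messageId}`);
}
```

For Lambda to honour the returned batchItemFailures, the event source mapping must opt in to partial batch responses — in Terraform that is the function_response_types = ["ReportBatchItemFailures"] argument on aws_lambda_event_source_mapping.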

This code will be triggered every time a new SQS message is added to our movie-updates-queue. We now need to enable SES and send an email through our lambda. To do so, create a new file named email.tf in the iac folder. There, add the following code:

# The email here will receive a verification email
# To set it as verified in SES
resource "aws_ses_email_identity" "email_identity" {
 email = "YOUR_EMAIL"
}

# Configuration sets are groups of rules to monitor your SES email
# sending activity, e.g. event destinations and IP pool management
resource "aws_ses_configuration_set" "configuration_set" {
 name = "movies-configuration-set"
}

Change the YOUR_EMAIL placeholder for the email you’d like to be the identity that the lambda will use as the source of the email.

If you already have a domain, you can use the aws_ses_domain_identity resource instead, but the verification steps are different. If you already have it registered in Route 53, you can use Terraform to automatically verify it for you with the aws_route53_record resource.

A configuration set is a group of rules that you can apply to your verified identities. Once you push it to GitHub and the workflow succeeds, the email provided in the Terraform resource will receive an email from AWS with a link to verify its identity. Click on the link to verify and enable it.

Now, we can go back to our process-movie-update-events lambda and finalize our code. Create a models.ts file in its src folder and add the following code:

export type MovieCreated = {
 id: string;
 title: string;
 rating: number;
 genres: string[];
};

export type MovieDeleted = {
 id: string;
};

export type MovieUpdated = {
 id: string;
 title: string;
 rating: number;
 genres: string[];
};

export const MovieCreatedEventType = "MovieCreated";
export const MovieDeletedEventType = "MovieDeleted";
export const MovieUpdatedEventType = "MovieUpdated";

Let’s adapt our index.ts file with the SES code:

import { SQSEvent, Context, SQSHandler, SQSRecord } from "aws-lambda";
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";
import { MovieCreated, MovieCreatedEventType, MovieDeleted, MovieDeletedEventType, MovieUpdated, MovieUpdatedEventType } from "./models.js";

export const handler: SQSHandler = async (event: SQSEvent, context: Context): Promise<void> => {
 const client = new SESClient({});
 const promises: Promise<void>[] = [];
 for (const message of event.Records) {
   promises.push(processMessageAsync(message, client));
 }

 await Promise.all(promises);

 console.info("done");
};

async function processMessageAsync(message: SQSRecord, client: SESClient): Promise<void> {
 try {
   const eventType = message.messageAttributes["Type"].stringValue ?? "MovieEvent";
   console.log(`Processing ${eventType} message ${message.body}`);

   await sendEmail(message, eventType, client);

   console.log(`Processed ${eventType} message ${message.body}`);
 } catch (err) {
   console.error("An error occurred");
   console.error(err);
 }
}

async function sendEmail(message: SQSRecord, eventType: string, client: SESClient) {
 const [subject, body] = buildSubjectAndBody(message.body, eventType);

 const sourceEmail = "YOUR_SOURCE_EMAIL"; // Ideally this should be validated, with an error logged if not set
 const destinationEmail = "YOUR_DESTINATION_EMAIL"; // Ideally this should be validated, with an error logged if not set

 const command = new SendEmailCommand({
   Source: sourceEmail,
   Destination: {
     ToAddresses: [destinationEmail],
   },
   Message: {
     Body: {
       Text: {
         Charset: "UTF-8",
         Data: body,
       },
     },
     Subject: {
       Charset: "UTF-8",
       Data: subject,
     },
   },
 });

 await client.send(command);
}

function buildSubjectAndBody(messageBody: string, eventType: string): [string, string] {
 let subject = "";
 let body = "";
 const messageJsonBody = JSON.parse(messageBody);
 switch (eventType) {
   case MovieCreatedEventType:
     const movieCreatedEvent = <MovieCreated>messageJsonBody;

     subject = "New Movie Added: " + movieCreatedEvent.title;
     body = "A new movie was added!\n" +
       "ID: " + movieCreatedEvent.id + "\n" +
       "Title: " + movieCreatedEvent.title + "\n" +
       "Rating: " + movieCreatedEvent.rating + "\n" +
       "Genres: " + movieCreatedEvent.genres;

     break;

   case MovieDeletedEventType:
     const movieDeletedEvent = <MovieDeleted>messageJsonBody;

     subject = "Movie Deleted. ID: " + movieDeletedEvent.id;
      body = "A movie was deleted!\n" +
       "ID: " + movieDeletedEvent.id;

     break;

   case MovieUpdatedEventType:
     const movieUpdatedEvent = <MovieUpdated>messageJsonBody;

     subject = "Movie Updated: " + movieUpdatedEvent.title;
     body = "A movie was updated!\n" +
       "ID: " + movieUpdatedEvent.id + "\n" +
       "Title: " + movieUpdatedEvent.title + "\n" +
       "Rating: " + movieUpdatedEvent.rating + "\n" +
       "Genres: " + movieUpdatedEvent.genres;

     break;

   default:
     throw new Error("An unknown movie event was received");
 }

 return [subject, body];
}

Don’t forget to change YOUR_SOURCE_EMAIL to the email set in SES, and YOUR_DESTINATION_EMAIL to the email you’d like to receive these event messages. We are just left to add a workflow to deploy this lambda. So, let’s add a deploy-email-notification-lambda.yml file in the .github/workflows folder and add the following code to build and deploy your lambda:

name: Deploy Email Notification Lambda
on:
 push:
   branches:
     - main
   paths:
     - apps/process-movie-update-events/**/*
     - .github/workflows/deploy-email-notification-lambda.yml

defaults:
 run:
   working-directory: apps/process-movie-update-events/

jobs:
 terraform:
   name: "Deploy Email Notification Lambda"
   runs-on: ubuntu-latest
   steps:
     # Checkout the repository to the GitHub Actions runner
     - name: Checkout
       uses: actions/checkout@v3

     - name: Setup NodeJS
       uses: actions/setup-node@v4
       with:
         node-version: 20

     - name: Configure AWS Credentials Action For GitHub Actions
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: eu-central-1

     - name: Install packages
       run: npm install

     - name: Build
       run: npm run build

     - name: Zip build
       run: zip -r -j main.zip ./build

     - name: Update Lambda code
        run: aws lambda update-function-code --function-name=process-movie-update-events --zip-file=fileb://main.zip

Amazing! Now, push your code to GitHub and wait for your lambda deployment. You can create, update, and delete a movie to test it. An email should be received in the destination email with the body and subject set by you in the lambda.

Conclusion

This journey provided a practical guide to constructing serverless APIs and highlighted the simplicity and scalability achieved by combining AWS services and infrastructure as code (IAC) with Terraform.

We’ve successfully built a serverless CRUD API with NodeJS and Go on AWS using Terraform. We started by setting up the foundational components, including an API Gateway, Lambda functions, and a DynamoDB table.

The flexibility of AWS Lambda allowed us to showcase the ease of working with different programming languages, demonstrating implementations in both Go and TypeScript. Each Lambda function was equipped with the necessary permissions to interact with the DynamoDB table, showcasing the power of AWS Identity and Access Management (IAM) for secure resource access.

Additionally, we automated the deployment process using GitHub Actions, ensuring a seamless integration of code changes into the AWS environment. By following these steps, readers can replicate and extend the project, gaining insights into building robust serverless APIs with diverse language support.

Not only that, but you could also learn how to set up and send messages to SNS and SQS with a fanout pattern to trigger other lambdas.

And last, we’ve also managed to use SES (Simple Email Service) to send emails by triggering a lambda through a SQS event.

I hope you enjoyed this article as much as I enjoyed writing it! The code for this project can be found here.

Happy coding! 💻

Frequently Asked Questions

What AWS services are used in this serverless CRUD API project?

This project uses AWS API Gateway, Lambda (Node.js, Go, TypeScript), DynamoDB, SNS, SQS, SES, and S3 for remote Terraform state storage, orchestrated using Terraform and deployed via GitHub Actions.

Why use Terraform and GitHub Actions for serverless deployment?

Terraform defines and manages infrastructure as code, giving consistency and modifiability. GitHub Actions automates continuous deployment—both infrastructure and lambda code are applied on push to main, enabling scalable and maintainable workflows.

How is the “get movie by ID” Lambda structured?

A Terraform module creates the Node.js 20.x Lambda function with IAM role, policy for DynamoDB access, and API Gateway GET method integration. The Lambda fetches a DynamoDB item by the provided movieID and returns a structured JSON response.

How are API Gateway methods modularized?

A Terraform module (rest‑api‑method) handles creation of API methods, Lambda proxy integrations, and permissions. Modules are reused for GET, POST, PUT, DELETE endpoints under /movies and /movies/{movieID}.

How does the blog implement CRUD operations in serverless?

  • GET fetches a movie by ID (Node.js).
  • POST creates a movie (Go).
  • DELETE deletes a movie (TypeScript).
  • PUT updates a movie (Go).

Each Lambda is integrated via API Gateway to DynamoDB backend and deployed with GitHub Actions.

How are cross-service events handled for notifications?

  • Lambdas publish events to SNS whenever a movie is created, updated, or deleted.
  • SNS fans out to an SQS queue.
  • A Node.js Lambda processes messages from SQS and sends emails via SES for event notifications.

How are Lambda functions deployed and updated?

Each Lambda has a dedicated GitHub Actions workflow. Node.js lambdas zip and update code; Go lambdas are built for Linux and deployed similarly. Infrastructure changes trigger Terraform workflows on iac folder updates.

How can I avoid unexpected AWS billing during this experiment?

All used services fall under the AWS free tier or have minimal cost. To be safe, set an AWS budget (e.g., $0.01) to receive alerts if usage incurs charges.
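Since everything else in this project is managed with Terraform, that safety budget can be codified as well — a hedged sketch, where the resource name and alert email are illustrative:

```hcl
resource "aws_budgets_budget" "zero_spend_alert" {
  name         = "zero-spend-alert"
  budget_type  = "COST"
  limit_amount = "0.01"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  # Email an alert as soon as actual spend exceeds the tiny budget
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["YOUR_EMAIL"] # placeholder
  }
}
```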

The post Building a Serverless API With Email Notifications in AWS with Terraform appeared first on Serverless Architecture Conference.

]]>
AWS Serverless: Lambda Scalability & Zero-Cost Architectures https://serverless-architecture.io/blog/serverless-architecture-design/aws-serverless-previews-vadym-dan/ Wed, 13 Aug 2025 13:23:58 +0000 https://serverless-architecture.io/?p=89268 ⚡ Vadym Kazulkin and Dan Erez take the stage in October at the Serverless Conference Berlin. Here’s a 30-minute sneak peek to get you started.

The post AWS Serverless: Lambda Scalability & Zero-Cost Architectures appeared first on Serverless Architecture Conference.

]]>

 

Serverless Scalability: Myths, Quotas & Smart Architecture

Vadym Kazulkin kicked off with a reality check: AWS Serverless services don’t scale without limits.

  • Key quotas for API Gateway, Lambda, DynamoDB, Aurora Serverless, SQS, S3 and more
  • Architectural choices between DynamoDB vs. Aurora
  • Event-driven design with SQS, SNS, Kinesis, EventBridge
  • Practical concepts like token bucket algorithms, concurrency handling, and exponential backoff

View Vadym’s Full Session Details

 

Zero-Cost Serverless? Yes, Really.

Dan Erez showed how you can use your existing infrastructure — Kubernetes, open-source serverless platforms, and even browsers — to run workloads for free, at scale.

  • How to spot underused resources in your org
  • The open-source stack to make them serverless-ready
  • When it’s worth running in-house vs. public cloud

View Dan’s Full Session Details

 

📅 Join Us in Berlin – October 20–23, 2025

Until September 18th, you can save up to €285 on your tickets!

Register Now

FREQUENTLY ASKED QUESTIONS

What are the scalability limitations of AWS Serverless services?

AWS Serverless services do not scale infinitely. Service-specific quotas apply to Lambda, API Gateway, DynamoDB, Aurora Serverless, and SQS, and must be accounted for in architecture design.

How does AWS handle concurrency in Serverless applications?

Concurrency in AWS Lambda is managed through regional and function-level limits. Patterns like token bucket algorithms and exponential backoff help mitigate overload scenarios.

What are the architectural trade-offs between DynamoDB and Aurora Serverless?

DynamoDB offers low-latency and high scalability, while Aurora Serverless provides relational features and consistency. The choice depends on workload requirements.

Which AWS services are key in event-driven architectures?

AWS services like SQS, SNS, Kinesis, and EventBridge are essential for event-driven design, enabling decoupled, scalable microservices communication.

Can you really run serverless workloads at zero cost?

Yes. By using existing infrastructure like Kubernetes clusters or browser runtimes and open-source tools, zero-cost serverless workloads are achievable in some cases.

The post AWS Serverless: Lambda Scalability & Zero-Cost Architectures appeared first on Serverless Architecture Conference.

]]>
DevOps and Serverless – Friends or Foes? https://serverless-architecture.io/blog/serverless-architecture-design/devops-and-serverless-friends-or-foes/ Tue, 08 Jul 2025 12:09:50 +0000 https://serverless-architecture.io/?p=89208 Are DevOps and Serverless competing or collaborating? Dive into the highlights from last year’s packed panel where experts debated whether serverless makes DevOps obsolete.

The post DevOps and Serverless – Friends or Foes? appeared first on Serverless Architecture Conference.

]]>

When the worlds of DevOps and Serverless collide, are they really competing — or secretly on the same team?

🚀 Serverless: The Developer’s Shortcut

Serverless is loved by developers for good reason — it keeps things fast and simple. Teams can skip the tedious parts of infrastructure and jump straight into building products. But here’s the catch: the Ops work doesn’t vanish — it just moves behind the scenes to cloud providers, managed services, and automated pipelines.

🛠 DevOps: Still Doing the Heavy Lifting

DevOps is far from obsolete. Serverless runs because strong DevOps practices run underneath it — monitoring, security, reliability, and automation keep the lights on. While developers might not see it, someone’s making sure your serverless code stays fast, safe, and scalable.

⚙ When Should You Build It Yourself?

Of course, it’s not always one-size-fits-all. Sometimes building your own Kubernetes stack makes sense — if your team needs full control, has strict compliance rules, or needs deep security visibility. But for most, managed serverless wins on speed and time-to-market.

Don’t pick your tech stack because it’s trendy. Context matters more than buzzwords. Yes, vendor lock-in is a risk — but so is data gravity (once your data’s in, moving it is hard). Staying flexible always comes with a price.

In the end, the panel’s biggest takeaway was clear: Serverless and DevOps aren’t enemies — they’re partners.

And guess what?

Michael Dowden, who led this lively session last year, is back at the conference again this year — ready to push this conversation forward. Don’t miss it!

 

Explore Program from SLA Berlin Conference 2025

The post DevOps and Serverless – Friends or Foes? appeared first on Serverless Architecture Conference.

]]>
Breaking Down Silos with InnerSource: A Keynote on Open Collaboration https://serverless-architecture.io/blog/innersource-open-collaboration-github-copilot-codespaces/ Mon, 07 Jul 2025 13:11:16 +0000 https://serverless-architecture.io/?p=89185 Ready to open up your codebase without opening it to the whole world? Watch this keynote and learn how InnerSource, GitHub Copilot, Codespaces, and Actions help teams tear down silos, share ideas, and build better software together.

The post Breaking Down Silos with InnerSource: A Keynote on Open Collaboration appeared first on Serverless Architecture Conference.

]]>
When was the last time you peeked into another team’s codebase? For many organisations, the answer is “never” — not because they don’t want to, but because silos and scattered workflows get in the way.

In this inspiring keynote from one of our previous conferences, you’ll discover how InnerSource can change that — and why this level of practical, real-world insight is exactly what you can expect at our upcoming SLA BERLIN 2025 conference too.

InnerSource takes the best ideas from open source — like transparent collaboration, clear contribution standards, and shared ownership — and brings them inside your company’s walls.
Instead of reinventing the wheel behind closed doors, teams can open up their projects to each other, share ideas, and build better software together.

In this session, you’ll learn:

  • What InnerSource really means (hint: it’s not just open source with a new name)
  • Why breaking down silos boosts innovation and reclaims wasted effort
  • How tools like GitHub Copilot, Codespaces, and GitHub Actions make contributing smoother and faster
  • Real-life examples — including how Visual Studio Code itself began as an InnerSource project

If you’re curious about how to open up your codebase, work more transparently, and build a culture of trust and contribution — this video is for you.
Grab a coffee (or that beer) and press play.


Explore Program from SLA Berlin Conference 2025

The post Breaking Down Silos with InnerSource: A Keynote on Open Collaboration appeared first on Serverless Architecture Conference.

]]>
Running Spring Boot 3 on AWS Lambda https://serverless-architecture.io/blog/spring-boot-3-on-aws-lambda-serverless-java-container-optimization/ Mon, 23 Sep 2024 11:25:20 +0000 https://serverless-architecture.io/?p=88955 We’ll demonstrate several ways how Spring Boot 3 applications can be implemented, optimised, and operated on AWS Lambda with Java 21, based on different frameworks or tools. This longer series kicks off by looking at AWS Serverless Java Containers.

The post Running Spring Boot 3 on AWS Lambda appeared first on Serverless Architecture Conference.

]]>
Let’s have a look at the different ways to run Spring Boot 3 applications on AWS Lambda using the following frameworks, technologies, and tools:

  • AWS Serverless Java Container
  • AWS Lambda Web Adapter
  • Spring Cloud Function
  • Customised Docker Image

 

First things first, we’ll introduce the concept behind it and learn how to develop, deploy, and operate our application with the respective approach. We’ll also look at GraalVM Native Image using Spring Cloud Function as an option deployed as an AWS Lambda Custom Runtime. Last but not least, we will investigate if native support for Coordinated Restore at Checkpoint (CRaC) in Spring Boot 3 is also a valid approach.

 

Of course, we’ll measure the Lambda function’s cold and warm start times with all of the aforementioned approaches and evaluate the solutions. We’ll also see how we can optimise cold starts for the Lambda functions with SnapStart (including various priming techniques) – if it is a feasible solution for the respective approach (currently not available for the Docker container images). Code examples for the entire series are available on my GitHub account.

 

Introduction

The AWS Serverless Java Container facilitates the execution of Java applications written with frameworks like Spring, Spring Boot 2 and 3, or JAX-RS/Jersey in Lambda. The container offers adapter logic to minimise code changes. Incoming events are translated into the servlet specification so that the frameworks can be used as before (Fig. 1).

 


Fig. 1: AWS Serverless Java Container architecture

AWS Serverless Java Container provides the core container and framework-specific containers such as the one for Spring Boot 3, which is the focus of this article. There are also other containers for the Spring, Struts, and Jersey frameworks. A major update to version 2.0 was recently released for all AWS Serverless Java Containers. The dependency tree of the aws-serverless-java-container-springboot3 artefact includes a further dependency, spring-cloud-function-serverless-web. This is a result of the collaboration between the Spring and AWS serverless developers and provides Spring Cloud Function functionality on AWS Lambda. (The possibilities of this will be discussed in an upcoming entry to this AWS Lambda series.)

AWS Serverless Java Core Container also provides abstractions such as AWSProxyRequest/Response for mapping API Gateway (REST) requests to the servlet model, including various authorisers such as Amazon Cognito and HttpApiV2JwtAuthorizer. In the core container, everything is passed through the AwsHttpServletRequest/Response abstractions or their derivatives like AwsProxyHttpServletRequest.

My personal preference is that a subset of abstractions from the Java package com.amazonaws.serverless.proxy.model, such as

  • AwsProxyRequest
  • ApiGatewayRequestIdentity
  • AwsProxyRequestContext
  • AwsProxyResponse

and others become part of a separate project and can be used without using all the other AWS Serverless Java Container APIs just for mocking the API Gateway Request/Response (i.e. priming). In an upcoming entry, we’ll directly use these abstractions when we look at cold and warm start time improvements for Spring Boot 3 applications on AWS Lambda using AWS Lambda SnapStart together with priming techniques. An introduction to AWS Lambda SnapStart can be found in Java Magazin 10.2023.

The Lambda Runtime has to know which handler method will be called. The AWS Serverless Spring Boot 3 container, which internally uses the AWS Serverless Java Core container, simply adds some implementations, such as SpringDelegatingLambdaContainerHandler. We can also implement our own Java handler class that delegates to the AWS Serverless Java container. This is useful if we want to implement extra functions like the Lambda SnapStart priming technique. The SpringBootLambdaContainerHandler abstraction (which inherits the AwsLambdaServletContainerHandler class from the core container) can be used by passing the main Spring Boot class annotated with @SpringBootApplication as input. For Spring Boot 3 applications that take longer than ten seconds to start, there is an asynchronous way to create SpringBootLambdaContainerHandlers by using the SpringBootProxyHandlerBuilder abstraction. Since version 2.0.0’s release, it always runs asynchronously by default. Previously, we had to call the asyncInit method (which is now deprecated) to initialise SpringBootProxyHandlerBuilder asynchronously. I’ll explain this in more detail later using code examples.

 

STAY TUNED!

Learn more about Serverless Architecture Conference

 

Developing the application

To explain this, we use our Spring Boot 3 example application and the Java 21 runtime for our lambda functions (Fig. 2).

 

Fig. 2: The demo application’s architecture

In this application, we’ll create products and retrieve them by their ID, using Amazon DynamoDB as a NoSQL database for the persistence layer. We use Amazon API Gateway, which makes it easy for developers to create, publish, maintain, monitor, and secure APIs. AWS SAM will also be used, which provides a short syntax optimised for the definition of Infrastructure as Code (hereafter IaC) for serverless applications. You can find the full code of Product Controller (ProductController class), DynamoDB persistence logic (DynamoProductDAO class), request stream handler implementation (StreamLambdaHandler class) and IaC based on AWS SAM (template.yaml) in my GitHub repository.

To build this application, we run mvn clean package. To deploy it, we run sam deploy -g in the directory where the SAM template (template.yaml) is located. The deployment returns our individual Amazon API Gateway URL, which we can use to create products and retrieve them by their ID. The interface is secured with an API key (we have to send the HTTP header “X-API-Key: a6ZbcDefQW12BN56WEA7”). To create the product with the ID 1, we can send the following request with curl, for example:

curl -X PUT -d '{ "id": 1, "name": "Print 10×13", "price": 0.15 }' -H "X-API-Key: a6ZbcDefQW12BN56WEA7" https://{$API_GATEWAY_URL}/prod/products/1

To query the existing product with ID 1, the following curl query must be sent:

curl -H "X-API-Key: a6ZbcDefQW12BN56WEA7" https://{$API_GATEWAY_URL}/prod/products/1

Now, let’s look at the relevant source code fragments. The Spring Boot 3 ProductController class, annotated with @RestController and @EnableWebMvc, defines the methods getProductById and createProduct (Listing 1).

Listing 1

@RequestMapping(path = "/products/{id}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
public Optional<Product> getProductById(@PathVariable("id") String id) {
  return productDao.getProduct(id);
}

@RequestMapping(path = "/products/{id}", method = RequestMethod.PUT, consumes = MediaType.APPLICATION_JSON_VALUE)
public void createProduct(@PathVariable("id") String id, @RequestBody Product product) {
  product.setId(id);
  productDao.putProduct(product);
}

The main dependency for the function and the translation between the Spring Boot 3 model (web annotations) and AWS Lambda is the dependency on the artefact aws-serverless-java-container-springboot3, which is defined in the pom.xml:

<dependency>
  <groupId>com.amazonaws.serverless</groupId>
  <artifactId>aws-serverless-java-container-springboot3</artifactId>
  <version>2.0.0</version>
</dependency>

 

It is based on the Serverless Java Container, which natively supports the API Gateway’s proxy integration models for requests and responses. We can create and inject custom models for methods that use custom mappings.

The easiest way to tie everything together is to define a generic SpringDelegatingLambdaContainerHandler from the aws-serverless-java-container-springboot3 artefact in the AWS SAM template (template.yaml). We’ll also pass the main class of our Spring Boot application (the class annotated with @SpringBootApplication ) as the MAIN_CLASS environment variable (Listing 2).

Listing 2

Handler: com.amazonaws.serverless.proxy.spring.SpringDelegatingLambdaContainerHandler 
Environment:
  Variables:
    MAIN_CLASS: com.amazonaws.serverless.sample.springboot3.Application

SpringDelegatingLambdaContainerHandler stands in as the proxy, receiving all requests and forwarding them to the correct method of our Spring Boot Controller (ProductController class).

Another way to tie everything together is to implement our own request stream handler (StreamLambdaHandler class), which implements the com.amazonaws.services.lambda.runtime.RequestStreamHandler interface to define it in the AWS SAM template (template.yaml) (Listing 3).

Listing 3

Globals:
  Function:
    Handler: software.amazonaws.example.product.handler.StreamLambdaHandler::handleRequest

As a user-defined generic proxy, StreamLambdaHandler receives all requests and forwards them to the correct method of our Spring Boot controller (ProductController class). First, in the StreamLambdaHandler, we instantiate the SpringBootLambdaContainerHandler:

SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);

We’ve given Application.class, the main class of our Spring Boot application (the class annotated with @SpringBootApplication ), as a parameter when instantiating the handler.

In the following excerpt from the StreamLambdaHandler class, the input stream is forwarded to the designated method of the handler’s product controller:

@Override
public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
  handler.proxyStream(inputStream, outputStream, context);
}

If we want to implement our own logic, this approach is the preferred method. The next part of this series will show why with Lambda SnapStart Priming.

Now, we’ve looked at all the relevant parts of the source code of the Spring Boot 3 application on AWS Lambda with AWS Serverless Java Container. We also learned how everything works together. Let’s look at cold and warm start metrics and see how we can optimise the cold start.

Measuring strategies for cold and warm starts

For a good overview of cold starts in AWS serverless applications, AWS Lambda SnapStart, pre- and post-snapshot hooks (based on CRaC), and priming techniques, see my article in Java Magazin 10.2023 and my article series on dev.to. We’ll build upon these concepts in the following steps.

We want to measure the cold and warm start of the lambda function named GetProductByIdWithSpringBoot32 (see template.yaml for the mapping), which determines products based on an ID, for four different cases:

  1. without activation of AWS (Lambda) SnapStart
  2. with activation of AWS (Lambda) SnapStart, but without the application of priming
  3. with activation of AWS (Lambda) SnapStart and the application of priming of a DynamoDB request
  4. with activation of AWS (Lambda) SnapStart and with application of local priming/proxying of the entire web request

Let’s go through all four cases individually and see what we need to consider.

Without activating AWS (Lambda) SnapStart

We can use the code in the IaC template, based on AWS SAM (template.yaml) in Listing 2 or 3 for this. SnapStart will not be activated in the default variant.

With activation of AWS (Lambda) SnapStart, but without the application of priming

We can adopt the code in the IaC template, based on AWS SAM (template.yaml) in Listing 2 or 3, but we must also activate SnapStart on the Lambda functions, as shown in Listing 4.

Listing 4

Globals:
  Function:
    Handler: software.amazonaws.example.product.handler.StreamLambdaHandler::handleRequest
    SnapStart:
      ApplyOn: PublishedVersions

With activation of AWS (Lambda) SnapStart and priming of the DynamoDB request

In this case, the IaC based on AWS SAM (template.yaml) looks like Listing 5.

Listing 5

Globals:
  Function:
    Handler: software.amazonaws.example.product.handler.StreamLambdaHandlerWithDynamoDBRequestPriming::handleRequest
    SnapStart:
      ApplyOn: PublishedVersions

We activate SnapStart for the Lambda functions and use the custom Lambda RequestStreamHandler implementation called StreamLambdaHandlerWithDynamoDBRequestPriming, which also performs DynamoDB request priming (based on CRaC). Listing 6 shows the relevant source code of the class.

Listing 6

public class StreamLambdaHandlerWithDynamoDBRequestPriming implements RequestStreamHandler, Resource {
  private static final ProductDao productDao = new DynamoProductDao();
  ...
  public StreamLambdaHandlerWithDynamoDBRequestPriming() {
    Core.getGlobalContext().register(this);
  }
  ...
  @Override
  public void beforeCheckpoint(org.crac.Context<? extends Resource> context) throws Exception {
    productDao.getProduct("0");
  }
  ...
}

 

The class implements the interface org.crac.Resource and registers itself as a CRaC resource in the constructor. Priming occurs in the beforeCheckpoint method, where we ask DynamoDB for the product with the ID 0. This means that most of the call to the lambda function named GetProductByIdWithSpringBoot32 (see template.yaml for mapping) is primed.

We’re not interested in the result at all. This call instantiates all required classes, and the expensive one-time initialisation of the HTTP client (the default is the Apache HTTP client) and the Jackson marshaller (for converting Java objects to JSON and vice versa) is carried out. Since this happens during the Lambda function’s deployment phase when SnapStart is activated and before the snapshot is created, the snapshot will already contain all of this. After the fast snapshot restore during the Lambda call, priming gains us a lot of performance in the case of a cold start (see measurements below).

With activation of AWS (Lambda) SnapStart and application of local priming/proxying of the entire web request

In this case, the IaC based on AWS SAM (template.yaml) looks like Listing 7.

Listing 7

Globals:
  Function:
    Handler: software.amazonaws.example.product.handler.StreamLambdaHandlerWithWebRequestPriming::handleRequest
    SnapStart:
      ApplyOn: PublishedVersions

We enable SnapStart for the Lambda functions and use the custom-written Lambda RequestStreamHandler implementation called StreamLambdaHandlerWithWebRequestPriming, which also performs local priming/proxying of the whole web request (based on CRaC). In doing so, we create the JSON that the Amazon API Gateway would normally use to call the Lambda function (in our case, the Lambda function that queries products by ID from DynamoDB), but everything happens locally, without network communication. Listing 8 shows the class’ relevant source code.

Listing 8

public class StreamLambdaHandlerWithWebRequestPriming implements RequestStreamHandler, Resource {

private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;
  static {
    try {
      handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
    } catch (ContainerInitializationException e) {
      ...
    }
  }

  ...
  public StreamLambdaHandlerWithWebRequestPriming() {
    Core.getGlobalContext().register(this);
  }
  ...
  @Override
  public void beforeCheckpoint(org.crac.Context<? extends Resource> context) throws Exception {
    // serialise the mock request from getAwsProxyRequest() to JSON before proxying it
    handler.proxyStream(new ByteArrayInputStream(new ObjectMapper().writeValueAsString(getAwsProxyRequest()).getBytes(StandardCharsets.UTF_8)), new ByteArrayOutputStream(), new MockLambdaContext());
  }
  ...
  private static AwsProxyRequest getAwsProxyRequest() {
    final AwsProxyRequest awsProxyRequest = new AwsProxyRequest ();
    awsProxyRequest.setHttpMethod("GET");
    awsProxyRequest.setPath("/products/0");
    awsProxyRequest.setResource("/products/{id}");
    awsProxyRequest.setPathParameters(Map.of("id","0"));
    final AwsProxyRequestContext awsProxyRequestContext = new AwsProxyRequestContext();
    final ApiGatewayRequestIdentity apiGatewayRequestIdentity= new ApiGatewayRequestIdentity();
    apiGatewayRequestIdentity.setApiKey("blabla");
    awsProxyRequestContext.setIdentity(apiGatewayRequestIdentity);
    awsProxyRequest.setRequestContext(awsProxyRequestContext);

    return awsProxyRequest;
  }
  ...
}

 

The class also implements the org.crac.Resource interface and registers itself as a CRaC resource in the constructor. The getAwsProxyRequest method creates the minimalist web request using the AwsProxyRequest abstraction from the AWS Serverless Java core container. We pass the HTTP method (GET) and the path (/products/0) to query the product with the ID 0 from DynamoDB.

Priming itself occurs in the beforeCheckpoint method. This is where we convert the result of the getAwsProxyRequest method into a ByteArrayInputStream, which the SpringBootLambdaContainerHandler then proxies. This internally calls the Lambda function named GetProductByIdWithSpringBoot32 (see template.yaml for the mapping), which makes the DynamoDB call (and thus primes it, too). This type of priming requires quite a lot of extra code, which can be significantly simplified with a few utility methods. Whether to use priming is a decision left to us, the developers.

The goal of priming is to instantiate all the required classes and to translate the Spring Boot 3 programming model (and the call) into the AWS Lambda programming model (and the call) using the AWS Serverless Java Container for Spring Boot 3. The expensive one-time initialisation of the HTTP client (the default is the Apache HTTP client) and the Jackson marshaller (for converting Java objects to JSON and vice versa) is also carried out. Since this is done during the Lambda function’s deployment phase when SnapStart is activated and before the snapshot is created, the snapshot will already contain all of this. After the quick snapshot restore during the Lambda call, priming gains us a lot of performance in the case of a cold start (see measurements below).

Presentation of the measurement results of cold and warm starts

The results of the following experiment are based on reproducing more than 100 cold starts and around 100,000 warm starts of the Lambda function over the duration of one hour. I used the hey load test tool, which is invoked much like curl. But you can use any other tool, like Serverless Artillery or Postman.

We allocate 1024 MB of memory to our Lambda function and use the Java compilation options -XX:+TieredCompilation -XX:TieredStopAtLevel=1, which can be assigned to the Lambda function with the environment variable JAVA_TOOL_OPTIONS. This is a good choice for relatively short-lived Lambda functions (Fig. 3). Alternatively, we can omit the JAVA_TOOL_OPTIONS variable in template.yaml; then the default “tiered compilation” setting takes effect, which also produces very good results.
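For orientation, the environment variable from Figure 3 can also be set directly in the IaC template. A minimal sketch in the style of the earlier listings (the exact placement under Globals is an assumption, not the repository's verbatim file):

```yaml
Globals:
  Function:
    MemorySize: 1024
    Environment:
      Variables:
        JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
```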

 

 

Fig. 3: Setting the environment variable JAVA_TOOL_OPTIONS with the compilation option of a lambda function

Table 1 shows the measurement results in milliseconds. The abbreviation C stands for cold start and W for warm start. The numbers after this are percentiles. Please note that you must do this yourself for your own use case and you may get (slightly) different results. This can happen for the following reasons:

  • Minor version change to the runtime environment managed by Lambda Amazon Corretto Java 21
  • Improvements creating and restoring Lambda SnapStart snapshots
  • Improvements to the Firecracker VM
  • Impact of the Java memory model (hit and miss L or RAM caches)

Table 1: The measurement results

Just by enabling AWS Lambda SnapStart for the Lambda function, its cold start time is significantly reduced. By also using DynamoDB call priming and especially local priming/proxying of the web request (though I don’t recommend using the latter technique in production), we achieved cold starts with measured values only slightly higher than the cold starts of the plain Lambda function without any framework. The warm start execution times are also only moderately higher than those measured for the plain Lambda function without frameworks.

Please note the effect of the AWS SnapStart snapshot tiered cache. With SnapStart enabled, the largest cold start values occur mainly in the first measurements; subsequent cold starts have lower values thanks to the tiered cache. Since the table shows the cold starts of the first 100 invocations after the Lambda version was published, all cold starts from around the 50th invocation onwards (the C50 to C90 range) are significantly lower. For further details about the technical implementation of AWS SnapStart and its tiered cache, I refer to Mike Danilov’s presentation “AWS Lambda Under the Hood”.

Conclusion

In this article, we looked at the AWS Serverless Java Container and its architecture and explained how to implement a Spring Boot 3 application on AWS Lambda with the Java 21 runtime environment using the AWS Serverless Java Container. We saw that we can largely reuse the existing Spring Boot 3 application as is. Specially written Lambda handlers are mainly needed when activating AWS Lambda SnapStart and using priming techniques. We also measured cold and warm starts for our Lambda functions and looked at optimisations with AWS Lambda SnapStart, including priming techniques that significantly reduce the Lambda function’s cold start.

In the next part of this series, I’ll introduce the AWS Lambda Web Adapter tool. We’ll learn how to develop the Spring Boot 3 application based on this tool and see how to run and optimise it on AWS Lambda.

The post Running Spring Boot 3 on AWS Lambda appeared first on Serverless Architecture Conference.

]]>
Exactly Once in Distributed Systems https://serverless-architecture.io/blog/exactly-once-in-distributed-systems/ Wed, 19 Jun 2024 10:21:58 +0000 https://serverless-architecture.io/?p=88830 If distributed systems are used, as is the case with microservices, distributed data processing (often asynchronous via message queues) is on the daily agenda. When messages are exchanged, they often need to be processed exactly once. As it turns out, it’s not so easy.

The post Exactly Once in Distributed Systems appeared first on Serverless Architecture Conference.

]]>
In distributed systems, asynchronous communication is often covered by a message broker. This is intended to achieve a decoupling between two services that can be scaled separately under certain circumstances. Communication over a message broker is always inherently divided into at least two parts: the producer and consumer.

 

A producer creates messages, as shown in Figure 1, while a consumer processes them. 

A simple message exchange

Fig.1: A simple message exchange

 

The message broker is often a stateful system – a type of database – and mediates messages between the producer and consumer. As a stateful system, a message broker is tasked with storing messages and making them available for retrieval. A producer writes messages to the broker, while a consumer can read them at any time. Exactly once means that the producer produces exactly one message and the consumer processes it exactly once. So very simple, isn’t it?

It can be that simple

As communication between the systems takes place over a network level, it isn’t guaranteed that the systems have the same level of knowledge. There must be a feedback channel that confirms individual operations in order to share a status. This is the only way to make sure that created messages have been received and processed correctly. When updated, the flow looks more like Figure 2.

Confirmations are used to share status between systems

Fig. 2: Confirmations are used to share status between systems

 

If a producer creates a message that will be read by the consumer, the consumer needs confirmation (Fig. 2, step 2). This is the only way for the producer to know that the message is correctly persisted in the broker and that it doesn’t need to be retransmitted.

 

Develop state-of-the-art serverless applications?

Explore the Serverless Development Track

The consumer reads the message and processes it. If processing is completed without errors, the consumer confirms it and the broker doesn’t have to deliver the message again.

Always trouble with communication

Unfortunately, with distributed communication over a network, various communication channels can sometimes break off or errors can occur. This can happen at many points: during creation, before consumption, and afterwards. This exact characteristic makes it difficult, or even impossible, to achieve exactly-once semantics.

 

Suppose a producer produces a message (Fig. 3). To ensure that the broker has saved it, the producer waits for confirmation. If this is not received, there is no guarantee that the broker will actually deliver this message.

 

The producer receives no confirmation and sends the message again

Fig. 3: The producer receives no confirmation and sends the message again

 

The producer must send this message again. This is exactly what happened in Figure 3, step 2, which is why the producer sends another message with the same content in step 3. Since there are now two messages, the consumer processes both of them in steps 4 and 5 – clearly not “exactly once”. The retry mechanism gives the message “at least once” semantics – at least once, not exactly once. As you can see in the image, this is because the producer transmits the same message twice to make sure that the broker has received and confirmed it at least once. This is the only way to ensure that the message isn’t lost.
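The effect of the retry can be illustrated with a small, purely local simulation (the class and message names are illustrative only; a real broker would of course involve a network): when the producer's confirmation is lost and it retransmits, the consumer ends up seeing the same message twice.

```java
import java.util.ArrayList;
import java.util.List;

// Purely local simulation of the retry scenario from Figure 3:
// the broker is a plain list, no network is involved.
public class AtLeastOnceDemo {
    // Returns how many copies of the message the consumer ends up processing.
    public static int processedCount(boolean confirmationLost) {
        List<String> broker = new ArrayList<>();
        String message = "order-42";

        broker.add(message);        // step 1: first transmission reaches the broker
        if (confirmationLost) {
            broker.add(message);    // step 3: confirmation lost, producer retransmits
        }
        return broker.size();       // the consumer processes everything stored in the broker
    }

    public static void main(String[] args) {
        System.out.println(processedCount(true));    // 2: at least once, not exactly once
        System.out.println(processedCount(false));   // 1: the happy path
    }
}
```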

 

Of course, the confirmation can also be ignored: if step 2 is omitted, there is no retry mechanism. The producer then transmits a message without waiting for confirmation from the broker. If the broker cannot process or save the message, it has no way of signalling failure or success. The message would be transmitted “at most once” – at most once, or not at all. Exactly once is fundamentally a problem for distributed applications that work with confirmations.

 

Unfortunately, this isn’t the end of the story if we consider the message end-to-end, from the producer to the consumer. Such a system also has a consumer, which in turn has to process each message once. Even when it’s guaranteed that the producer produces a message exactly once, one-time processing isn’t guaranteed.

 

A consumer processes the message and then attempts to confirm it

Fig. 4: A consumer processes the message and then attempts to confirm it

 

It is possible for a consumer to read the message in step 3 and process it correctly in step 4, as shown in Figure 4. The confirmation is lost in step 5. This results in the message being processed several times, but at least once.

 

A consumer confirms the message before it is processed

Fig. 5: A consumer confirms the message before it is processed

 

Conversely, confirming the message before processing is also possible. The consumer loads the message and directly confirms it. Then, the message is processed in step 5 from Figure 5. If processing fails now, the message has already been confirmed in step 4 and won’t be read in again. The message has been processed at most once or not at all.

 

As you can see, it’s easy to create at-most-once and at-least-once semantics in the various configurations on both the producer and consumer side. However, exactly once is a difficult problem because of the distributed system. Or is it even impossible?

STAY TUNED!

Learn more about Serverless Architecture Conference

Solutions must be found

In order to achieve exactly-once semantics, the processing of an application’s messages must support a certain property: idempotency. Idempotency means that an operation, no matter how often it is executed, always produces the same result. An example of this principle is setting a variable in a program’s code. You can implement this with setters or relative mutations.

 

For example, setAge or incrementAge. The operation person.setAge(14); can be executed any number of times in succession. The result always remains the same; it is always 14. person.incrementAge(1), on the other hand, is not idempotent. If this method is executed several times in succession, it will have different results. Namely, it will give one year more after each execution. This property of idempotency is the key to establishing exactly-once semantics.
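The setAge/incrementAge contrast from the text can be shown in a few lines (a minimal sketch; the Person-like class is condensed into the demo class itself):

```java
// Minimal illustration of idempotent vs. non-idempotent operations.
public class IdempotencyDemo {
    private int age;

    public int getAge() { return age; }

    // Idempotent: however often it runs, the result is the same.
    public void setAge(int value) { this.age = value; }

    // Not idempotent: every repetition changes the result.
    public void incrementAge(int by) { this.age += by; }

    public static void main(String[] args) {
        IdempotencyDemo person = new IdempotencyDemo();
        person.setAge(14);
        person.setAge(14);
        System.out.println(person.getAge());   // still 14

        person.incrementAge(1);
        person.incrementAge(1);
        System.out.println(person.getAge());   // 16: one year more per call
    }
}
```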

 

Applied to the previous systems, this means that at-least-once semantics with the idempotence property can lead to exactly-once processing. The confirmation system described above shows how at-least-once semantics can be implemented. What’s missing is a system of idempotence in processing. But how can processing messages be made idempotent?

 

To achieve this, the consumer must be able to obtain a local, synchronized state. In order to obtain the state of a message, it must be uniquely identifiable. This is the only way to enable retrieval and deduplication of the message.

 

Idempotent processing

Fig. 6: Idempotent processing

 

Unlike before, the consumer first saves the message in a local state store on each call (Fig. 6, step 4). If the message already exists locally, it doesn’t have to be saved again. The message is confirmed in step 5. If the confirmation fails and the message is transmitted again, this isn’t a problem: step 4 prevents it from being saved twice. This is where idempotency comes to life. During processing, the consumer can decide for itself whether processing is necessary, for example by introducing a status for each message and querying it locally in step 6. If the status is already set to Processed, nothing needs to be done. Conversely, a processed message must correctly update the status.
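The steps from Figure 6 can be sketched as an idempotent consumer (a simplified sketch: the state store is an in-memory map purely for illustration, where in production it would be a database shared by all consumer instances, and the broker acknowledgement is only marked as a comment):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Idempotent consumer: deduplicates messages by their unique ID and a status.
public class IdempotentConsumer {
    enum Status { RECEIVED, PROCESSED }

    private final Map<String, Status> stateStore = new ConcurrentHashMap<>();
    private int executions = 0;   // counts how often the business logic actually ran

    public int getExecutions() { return executions; }

    public void onMessage(String messageId) {
        // Step 4: save the message, deduplicated by its unique ID.
        stateStore.putIfAbsent(messageId, Status.RECEIVED);
        // Step 5: the acknowledgement to the broker would be sent here.
        // Step 6: query the local status; skip if already processed.
        if (stateStore.get(messageId) == Status.PROCESSED) {
            return;
        }
        executions++;   // the actual (possibly non-idempotent) business logic
        stateStore.put(messageId, Status.PROCESSED);
    }

    public static void main(String[] args) {
        IdempotentConsumer consumer = new IdempotentConsumer();
        consumer.onMessage("msg-1");
        consumer.onMessage("msg-1");   // redelivery after a lost acknowledgement
        System.out.println(consumer.getExecutions());   // 1: effectively exactly once
    }
}
```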

Conclusion

Distributed systems have a fundamental problem regarding creating exactly-once semantics. At the infrastructure level, a choice can be made between at least once or at most once. Only with the idempotency property can we be sure that messages are processed exactly once from end-to-end at the application level.

 

Of course, this doesn’t come for free. The application itself has to take over managing messages and their status. This isn’t really exactly once either, but it comes very close as a result of the idempotence property.

The post Exactly Once in Distributed Systems appeared first on Serverless Architecture Conference.

]]>
Distributed Tracing: Overhyped or Real Benefit? https://serverless-architecture.io/blog/distributed-tracing-real-benefit/ Wed, 05 Jun 2024 10:21:58 +0000 https://serverless-architecture.io/?p=88814 What about distributed tracing is real and what is fake? Before our article attempts to answer this question, let us first clarify the terminology and provide an overview of the functions of diagnostic technology for microservices and serverless-based architectures.

The post Distributed Tracing: Overhyped or Real Benefit? appeared first on Serverless Architecture Conference.

]]>
Modern software architectures based on microservices and serverless architectures bring advantages for application development. Distributed development teams can manage, monitor and operate their individual services more easily. The disadvantage is that they can easily lose sight of the “big picture”. If there are problems in a transaction that is distributed across several microservices, serverless functions and teams, it is almost impossible to distinguish the service responsible for the problem from the affected services. Distributed tracing should provide support here and monitor and make visible the overall system behavior.

Distributed tracing records the paths that a request (from an application or an end user) takes through a distributed application landscape such as microservices or serverless functions. Distributed tracing therefore offers an end-to-end view of the request and the relationships between different services. It is therefore a diagnostic technique that shows how a set of services behaves in order to process individual requests.

The terminology behind distributed tracing

Before we can talk about how distributed tracing works, we need to define some basic terms. To do this, it’s useful to refer to the definitions from OpenTelemetry. OpenTelemetry provides an open source standard and a set of technologies that can be used to capture and export traces from their cloud-native applications and infrastructure.

 

  • Request: A request (also known as a transaction) is the way in which applications communicate in a distributed system landscape. Each service can use a different technology to process the request, e.g. HTTP or MQTT.
  • Trace: A trace represents the end-to-end flow of a request through the various services. Each trace consists of several spans.
  • Span: In distributed tracing, a span indicates a single unit of work that is executed in a trace, e.g. an API call or a database query. Each service in the flow of a particular request through the distributed system contributes a span with a temporal context. A distinction can be made between two different types of span:
    • Root span: The root span (or parent span) is the first span in a trace.
    • Child span: All subsequent spans are referred to as child spans.
  • Instrumentation: In microservices environments, instrumentation usually refers to the code that is added to a service so that data can be collected. Modern tracing tools usually support instrumentation in multiple languages and frameworks. For Java, for example, instrumentation can be implemented with Spring Cloud Sleuth. The standard configurations (automated setup of spans, traces, etc.) are already adopted by the framework without you having to change anything in the code yourself. This allows various distributed tracing tools such as Jaeger or Zipkin to collect and visualize this data.
  • Sampling: Tracing data is often produced in large quantities; it is “expensive” not only to collect and store, but also to transmit. Sampling is used to strike a balance between monitoring coverage and these costs. It is the process of deciding whether or not to process and export a span. There are two approaches here:
    • Head-based sampling: The sampling decision is made at the very beginning, when the trace starts.
    • Tail-based sampling: The sampling decision is only made for the respective trace at the end of the process.
  • Trace context: The trace context is used to track the request across services. This is comparable to the shipping label on a parcel. The trace context contains data such as the trace ID, the span ID or various flags such as the sampling flag, which indicates whether the span should be processed or not. The trace context is therefore appended to the metadata of the transport protocol for each request. As can be seen in Table 1, the various providers have different formats for the trace context. Here, however, you should rely on the W3C TraceContext in your implementation. It attempts to establish a standardization in the jungle of different trace contexts, which more and more tool providers are also adapting.
  • Exporter: The component that bundles the data of a trace and exports it to the target backend or an endpoint in the required format. This allows distributed tracing tools such as Jaeger or Zipkin to collect, process and visualize the data.

 

How it all works

Distributed tracing data collection starts from the moment a request reaches a service, e.g. when a user submits a form on the website. The respective instrumentation then triggers the creation of a unique trace ID and a span, in this case the parent span. This first span ID is then the same as the trace ID. With each subsequent service call, the trace ID remains the same and the span ID changes. Each additional span is a child span. This procedure makes it possible to create a tree graph from the data that shows how much time the request has spent in each service, as can be seen in Figure 1 for example.
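The ID handling described above can be sketched in a few lines of Go. This is a hypothetical, simplified model rather than an actual tracing SDK (real instrumentation also records timestamps, tags, and the propagated trace context), and for simplicity the root span here gets its own span ID instead of reusing the trace ID:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// Span carries the identifiers described above: the trace ID is shared by
// every span in a request, while each span gets its own span ID and
// remembers its parent's span ID, which is what allows the tree graph
// to be reconstructed.
type Span struct {
	TraceID, SpanID, ParentID string
}

// randomID returns n random bytes as a hex string.
func randomID(n int) string {
	b := make([]byte, n)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// StartTrace creates the parent (root) span with a fresh trace ID.
func StartTrace() Span {
	return Span{TraceID: randomID(16), SpanID: randomID(8)}
}

// StartChild creates a child span: same trace ID, new span ID,
// linked back to its parent.
func StartChild(parent Span) Span {
	return Span{TraceID: parent.TraceID, SpanID: randomID(8), ParentID: parent.SpanID}
}

func main() {
	root := StartTrace()
	child := StartChild(root)
	fmt.Println(root.TraceID == child.TraceID) // the trace ID stays the same
	fmt.Println(child.ParentID == root.SpanID) // the child points at its parent
}
```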

 

| Trace context solution | Trace context example |
|---|---|
| Jaeger | uber-trace-id:{trace-id}:{span-id}:{parent-span-id}:{flags} |
| Zipkin | X-B3-TraceId:{trace-id}, X-B3-SpanId:{span-id}, X-B3-ParentSpanId:{parent-span-id}, X-B3-Sampled:{sampleFlag} |
| W3C TraceContext | traceparent:{version}-{trace-id}-{parent-id}-{trace-flags} |

 

Table 1: Trace context examples from various providers

 

 

Fig. 1: Example of a tree graph

 

The respective exporter then sends the data to the tracing server. In addition to the span context, further data can be sent along with the trace. This includes tags or logs, for example, which can provide the trace with important additional information for analysis, as shown in Figure 2.

 

Fig. 2: Example of trace data

 

Finally, the spans can be visualized in a tracing tool, for example using a flame graph. The parent span is at the top, with all child spans appearing below it in chronological order. An example of this is shown in Figure 3, which shows which service handled the request when and for how long. In our example, it is noticeable that service D “cost” the most time. It might be worth taking a closer look here in order to derive optimizations.

 

The benefits of distributed tracing

If distributed tracing is used, it should of course bring benefits. The following are the most important advantages of using distributed tracing:

 

  • Minimization of MTTD (mean time to detect) and MTTR (mean time to recover): Distributed tracing helps to reduce the time it takes to identify a problem and the time it takes to fix it. Time is money, so the faster the better.
  • Understanding service relationships: Distributed tracing can create an understanding of which services are related to each other. This provides a picture of the cause-and-effect relationship between the services.
  • Measuring specific user actions: For example, if a user submits a particular form, this action can be tracked across all services. Bottlenecks, errors etc. can thus be assigned to specific processes.
  • Improved collaboration and productivity: A request can run through several services for whose development several teams are responsible. Distributed tracing can efficiently reveal where the error occurred and which team needs to fix the problem.
  • Maintenance of SLAs (service level agreements): Most companies have SLAs, contracts with customers that set specific service performance targets. Distributed tracing can aggregate various performance data and helps to assess whether SLAs are being met.

 

Conclusion

In general, distributed tracing is the best way for DevOps, operations, software and SRE (site reliability engineering) teams to quickly get answers to specific questions in distributed software environments. Once a request involves a handful of microservices, it’s essential to see how all the different services work together. Trace data provides an answer to what is happening across the application. If there were only isolated views of the individual services, there would be no way to reconstruct the flow between individual services. Distributed tracing is a purely technical tool that should be used when you need answers to the following questions:

 

  • Which services are included in the request and for how long?
  • What is the cause of errors in a distributed system?
  • Where are there performance bottlenecks?

 

Distributed tracing makes it possible to identify problems quickly. It also helps you avoid reacting only after something has already happened: you should proactively look for problems in order to eliminate them before major difficulties arise. It is therefore quite clear that this is not just hype, but a helpful tool for gaining an overview of the "big picture" in the services jungle.

 

It should be noted that distributed tracing is only one part of the overall monitoring process. It definitely provides information about the services and their relationship to each other. However, it is essential to consider the other two pillars of observability in addition to distributed tracing: distributed metrics and distributed logging. This is the only way to create a business view in addition to the technical view. All three are necessary for an optimal understanding of a distributed application’s performance.

The post Distributed Tracing: Overhyped or Real Benefit? appeared first on Serverless Architecture Conference.

]]>
Watch Keynote: Platformized Approach to Engineering https://serverless-architecture.io/blog/platformized-approach-to-engineering/ Tue, 28 May 2024 10:19:46 +0000 https://serverless-architecture.io/?p=88789 This talk stresses how Platform Engineering promotes organizational sustainability by addressing short-term gains' pitfalls. Through DevOps principles, robust support systems, and standardized architectures, it enables faster delivery and empowers engineers. The case study illustrates its tangible benefits for organizations.

The post Watch Keynote: Platformized Approach to Engineering appeared first on Serverless Architecture Conference.

]]>

Lesley Cordero, Staff Software Engineer at The New York Times, presented a keynote during our latest Serverless Architecture Conference London 2023, highlighting the pivotal role of Platform Engineering in driving organizational sustainability. Here are a few of her insights:

  1. DevOps Principles: Cordero stressed the importance of DevOps principles in fostering collaboration, automation, and continuous improvement. By aligning teams and processes, organizations can reduce technical debt and enhance efficiency.
  2. Support Structures: Robust support mechanisms are crucial for seamless platform adoption. Effective support structures empower teams to innovate confidently and accelerate delivery timelines.
  3. Platform Architecture: Cordero discussed the significance of standardized architecture in enabling product engineers. Clear guidelines and principles lay the groundwork for innovation and productivity.

Watch the full keynote below to explore actionable strategies for driving organizational sustainability through Platform Engineering. Embrace these principles to navigate today’s tech landscape with resilience and foresight.

The post Watch Keynote: Platformized Approach to Engineering appeared first on Serverless Architecture Conference.

]]>
Watch Session: Generative AI Applications in the Serverless World https://serverless-architecture.io/blog/generative-ai-applications-in-the-serverless-world/ Mon, 06 May 2024 08:36:53 +0000 https://serverless-architecture.io/?p=88769 At the forefront of technological innovation, Generative AI is revolutionizing search capabilities and reaching new milestones in observability. In the session "Generative AI Applications in the Serverless World," presented by Diana Todea at the Serverless Architecture Conference, attendees embark on a journey through the current landscape of Generative AI technology.

The post Watch Session: Generative AI Applications in the Serverless World appeared first on Serverless Architecture Conference.

]]>
At the forefront of technological innovation, Generative AI is revolutionizing search capabilities and reaching new milestones in observability. In the session “Generative AI Applications in the Serverless World,” presented by Diana Todea at the Serverless Architecture Conference, attendees embark on a journey through the current landscape of Generative AI technology.

Todea explores the vast opportunities unlocked by Generative AI, leveraging vector databases, machine learning prowess, and the transformative flexibility of transformers. The session delves into practical use cases, spotlighting OpenTelemetry as the preeminent tool for observability within the serverless framework.

From AI assistants enhancing problem resolution to streamlined log analysis, Todea showcases how Generative AI is accelerating advancements across industries. By the session’s conclusion, participants gain invaluable takeaways:

  • Insight into how Generative AI observability enhances search and data analysis capabilities within the serverless ecosystem.
  • Understanding of how AI-driven automation simplifies implementation and usage.
  • Recognition of the streamlined data ingestion and log analysis facilitated by AI assistants.
  • Appreciation for the long-term benefits of implementing Generative AI technology for companies handling data retrieval and processing.

For those seeking to harness the transformative power of Generative AI within the serverless architecture, “Generative AI Applications in the Serverless World” offers illuminating insights and actionable strategies.

Watch the full session featuring Diana Todea below:

The post Watch Session: Generative AI Applications in the Serverless World appeared first on Serverless Architecture Conference.

]]>
Watch Session: Serverless-Side Rendering Micro-Frontends https://serverless-architecture.io/blog/serverless_side_rendering_micro_frontends/ Mon, 15 Apr 2024 10:55:30 +0000 https://serverless-architecture.io/?p=88721 The Serverless Architecture Conference hosted a groundbreaking session titled "Serverless-Side Rendering Micro-Frontends," featuring Luca Mezzalira as the esteemed speaker. Despite the passage of time, the relevance of this topic persists, making it worthy of renewed attention.

The post Watch Session: Serverless-Side Rendering Micro-Frontends appeared first on Serverless Architecture Conference.

]]>

Leveraging Serverless for Micro-Frontends: A Look Back, A Look Ahead

Building large-scale web applications can be a complex endeavor. Traditionally, monolithic architectures presented challenges in terms of scalability, maintainability, and team agility. Distributed architectures, like microservices, have offered solutions, but what about the frontend?

Enter micro-frontends, a revolutionary approach to building UIs that mirrors the benefits of microservices. This concept has been gaining traction since 2016, and at the Serverless Architecture Conference last year in London, Luca Mezzalira presented a thought-provoking session on “Serverless-Side Rendering Micro-Frontends.”

In this talk, Luca explored how to leverage serverless technologies on AWS to construct a server-side rendered micro-frontend application. This approach empowers development teams to work independently while ensuring exceptional performance for your users.

The Power of Micro-Frontends

Luca delves into the core concepts of micro-frontends, including:

  • Building UIs that represent business subdomains
  • Fostering independent development teams
  • Reducing external dependencies
  • Accelerating development and deployment cycles

Serverless and Micro-Frontends: A Winning Combination

The session further explores the advantages of using serverless technologies with micro-frontends. Serverless removes the burden of managing servers, allowing teams to focus on building innovative features. Luca demonstrates how serverless services on AWS can be harnessed to:

  • Render micro-frontends on the server-side
  • Stitch together the micro-frontends to form a cohesive user experience

Maintaining Team Independence and User Experience

A key benefit of Luca’s approach is that it empowers development teams to own and maintain their micro-frontends independently. This fosters agility and innovation. At the same time, server-side rendering ensures that users experience a fast and seamless interaction with your application.

Watch the Full Session for Free!

Ready to dive deeper into server-side rendered micro-frontends with serverless technologies? The full session from Luca Mezzalira is available for free viewing right here on this blog! Gain valuable insights into this powerful architectural approach and discover how it can transform the way you build your next web application.

The post Watch Session: Serverless-Side Rendering Micro-Frontends appeared first on Serverless Architecture Conference.

]]>