
Certified Backstage Associate – My Take!

A straightforward breakdown of what to expect, how to prepare, and what actually helped me pass.


Why This Certification?

The Certified Backstage Associate exam, offered by the CNCF, validates your understanding of Backstage — the open-source framework that has become the de facto standard for building Internal Developer Platforms (IDPs). If you’re working in platform engineering or building developer portals, this certification puts a formal stamp on skills that are increasingly in demand.

I recently passed the exam, and I want to share exactly how I prepared — no fluff, just what worked.


My Background Going In

I had roughly 4 years of hands-on experience working with Backstage and the surrounding ecosystem of tools — software catalogs, TechDocs, scaffolding via templates, plugins, and integrating Backstage into real-world platform engineering workflows.

That experience was a massive advantage. If you’ve been actively building or maintaining a Backstage instance, you already have a strong foundation. But experience alone isn’t enough — the exam tests specific concepts, terminology, and details that you might gloss over in day-to-day work.


Exam Overview

Before diving into preparation, here’s what you’re dealing with:

  • Format: Multiple choice
  • Duration: 90 minutes
  • Passing score: 75%
  • Delivery: Online, proctored
  • Cost: $250 (includes one free retake; for discounts check this page)
  • Validity: 2 years

Domain Breakdown

The exam covers the following domains:

  1. Backstage Architecture & Terminology — Core concepts, the app structure, frontend/backend separation
  2. Software Catalog — Entity kinds, catalog-info.yaml, entity relationships, processors, providers
  3. Software Templates (Scaffolder) — Template syntax, actions, custom actions, parameters
  4. TechDocs — The docs-like-code approach, MkDocs integration, generation and publishing strategies
  5. Plugins — Plugin architecture, frontend and backend plugins, extension points
  6. Security & Authentication — Auth providers, identity resolution, permissions framework
  7. Deployment & Configuration — app-config.yaml, database setup, deployment strategies

My Preparation Strategy

1. Lean Into Your Hands-On Experience

If you’ve been working with Backstage, don’t underestimate what you already know. Much of the exam felt like recalling things I’d already debugged, configured, or built.

That said, there were areas where my daily work didn’t go deep enough. I rarely thought about the exact lifecycle of entity processing or the specifics of the permissions framework beyond what I needed. The exam does go there.

Action item: Identify the domains above where your hands-on experience is thin. Focus your study time there.

2. The Udemy Practice Test — My Secret Weapon

The single most impactful resource for my preparation was this Udemy practice test:

Certified Backstage Associate – Practice Exam

Here’s why it was so effective:

  • It mirrors the real exam’s style. The phrasing, the depth of questions, and the way options are structured felt very close to the actual test.
  • It exposes your blind spots. I was confident going in, but the practice test humbled me in areas like the permissions framework and some catalog internals I hadn’t thought about deeply.
  • The explanations are useful. Don’t just check if you got the answer right — read the explanation for every question, even the ones you nailed. Sometimes your reasoning was right for the wrong reasons.

How I used it:

  1. Took the practice test cold (no prep) to get a baseline
  2. Noted every topic where I got questions wrong or guessed
  3. Studied those specific areas using the official docs
  4. Retook the practice test to confirm I’d closed the gaps
  5. Repeated until I was consistently scoring above 85%

3. Official Backstage Documentation

The official Backstage docs are the primary source of truth for this exam. Key sections to study thoroughly:

  • The Software Catalog — Entity descriptor format, well-known entity kinds, relations, substitutions
  • Software Templates — Template YAML structure, built-in and custom actions, parameter schemas
  • TechDocs — Architecture, recommended vs basic setup, MkDocs configuration
  • Plugins — How the plugin system works, frontend vs backend plugins, the new backend system
  • Auth & Permissions — Sign-in resolvers, the permission policy, resource rules
  • Architecture Decision Records (ADRs) — Skim through the key ADRs to understand why certain design decisions were made

4. Know the YAML Inside Out

A significant portion of the exam revolves around YAML configurations — catalog-info.yaml, template definitions, app-config.yaml. Make sure you can:

  • Write a catalog-info.yaml from memory for different entity kinds (Component, API, System, Domain, Resource, Group, User)
  • Understand template parameter schemas and how steps/actions work
  • Know the key configuration options in app-config.yaml
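
As a quick self-test, here is a minimal Component descriptor of the kind you should be able to reproduce from memory. It is held in a Python string purely so the required top-level keys can be asserted without extra dependencies; the metadata values (names, owner, system) are illustrative placeholders, not from any real catalog:

```python
# Minimal catalog-info.yaml for a Component entity.
# Values like "my-service" and "team-a" are illustrative placeholders.
CATALOG_INFO = """\
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service
  description: An example service
  annotations:
    backstage.io/techdocs-ref: dir:.
spec:
  type: service
  lifecycle: production
  owner: team-a
  system: my-system
"""

# The top-level keys the exam expects you to recall without looking them up.
REQUIRED_TOP_LEVEL = ["apiVersion:", "kind:", "metadata:", "spec:"]

def has_required_keys(doc: str) -> bool:
    """Return True if every required top-level key appears in the document."""
    return all(key in doc for key in REQUIRED_TOP_LEVEL)
```

Practicing the same exercise for the other kinds (API, System, Domain, Resource, Group, User) pays off, since the exam probes which spec fields belong to which kind.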

5. Understand the “Why” Behind IDPs

The exam doesn’t just test Backstage mechanics — it also touches on the philosophy of Internal Developer Platforms. Understand:

  • Why organizations adopt IDPs
  • The role of a software catalog in reducing cognitive load
  • Golden paths and how templates enable self-service
  • How Backstage fits into the broader platform engineering landscape

Exam Day Tips

  1. Time is generous. 90 minutes for the number of questions is comfortable. Don’t rush — read each question twice.
  2. Watch for “most correct” answers. Some questions have multiple plausible answers, but one is more correct. Pay attention to qualifiers like “best,” “primary,” “most likely.”
  3. Flag and move on. If a question stumps you, flag it and come back. Often a later question will jog your memory.
  4. Eliminate wrong answers first. On tricky questions, narrowing down from 4 options to 2 makes your odds much better.
  5. Check your environment. Since it’s a proctored exam, make sure your workspace is clean, your webcam works, your ID is ready, and you’ve tested the proctoring software beforehand. Don’t let logistics steal your mental energy.

Study Timeline

Here’s a rough guide depending on your experience level:

  • Heavy hands-on (3+ years): 1–2 weeks, focused on gaps + practice test
  • Moderate experience (1–2 years): 3–4 weeks, docs review + practice test + hands-on lab
  • Beginner / conceptual only: 6–8 weeks, full docs study + build a local Backstage instance + practice test

Resources at a Glance

  • Udemy Practice Test: Closest thing to the real exam; essential
  • Official Backstage Docs: Primary source of truth
  • CNCF Exam Page: Exam logistics, curriculum, registration
  • Backstage GitHub Repo: Useful for understanding plugin architecture and real-world examples
  • A local Backstage instance: Nothing beats hands-on experimentation

Final Thoughts

The Certified Backstage Associate exam is a well-designed certification that tests practical knowledge, not trivia. If you’ve been in the IDP space and have real experience with Backstage, you’re already most of the way there. The gap between “I use this daily” and “I can pass the exam” is mostly about being precise with terminology and knowing the corners of the platform you don’t touch every day.

The Udemy practice test was the highest-ROI resource for me. Combine that with a targeted read-through of the official docs, and you’ll be in great shape.

Good luck — and welcome to the growing community of platform engineers shaping how developers build software.

Container Security: AI-Powered Golden Base Image Auto-Patching

In today’s fast-paced cloud-native world, containerization has become the backbone of modern applications. However, maintaining the security of container images, especially the underlying “golden base images” is a persistent challenge. Manually tracking and patching vulnerabilities is a time-consuming, error-prone process that leaves critical exposure windows open.

This post is about how this challenge was tackled head-on with an innovative, AI-powered solution that automates the detection and patching of critical and high vulnerabilities in our Amazon ECR container images. This not only drastically reduces Mean Time To Patch (MTTP) but also frees up time to focus on innovation rather than reactive security tasks.

The Challenge: Manual Vulnerability Management

Before this solution, the process for handling base image vulnerabilities involved:

  • Regular scans from tools like AWS Inspector.
  • Manual review of findings by security and operations teams.
  • Manually creating Dockerfile patches.
  • Triggering new image builds and testing cycles.

This sequential, human-dependent workflow meant that even with the best intentions, the time from vulnerability detection to deployment of a patched image could span days, sometimes even weeks, especially for non-critical but high-priority vulnerabilities. This was simply not sustainable for our rapidly growing infrastructure.

Solution: AI-Powered, Fully Automated Patching

The solution envisioned a system that could not only detect vulnerabilities but also intelligently propose and execute the patches autonomously. This solution, managed entirely through Infrastructure as Code (IaC) in a dedicated infra-terraform-image-builder repository, integrates several key AWS services to create a seamless, end-to-end automation pipeline.

Here’s how this works:

The Workflow at a Glance

Key Components of Auto-Patching Pipeline

AWS Inspector2: The Sentinel – The first line of defense. AWS Inspector2 continuously scans AWS ECR repositories, detecting critical and high vulnerabilities in our container images. When a new finding emerges or an existing one escalates, Inspector2 alerts the system.

Amazon DynamoDB: The Central Brain & Trigger – Inspector2 findings are streamed into a dedicated DynamoDB table. This acts as the centralized source of truth for all vulnerabilities. Crucially, DynamoDB’s Streams feature directly feeds into the AWS Lambda function, acting as the primary trigger for automation whenever a new or updated critical/high severity finding is recorded.

AWS Lambda: The Orchestrator – This is the heart of the automation. A Python-based AWS Lambda function is invoked by DynamoDB Streams.

  • It parses the finding, identifying the vulnerable package and image.
  • It determines the base image that needs patching.
  • It orchestrates the entire patching process, from AI command generation to signaling the image build.
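
To make the orchestration concrete, here is a minimal sketch of such a stream-triggered handler. The DynamoDB attribute names (severity, cveId, packageName, imageUri) and the exact event shape are assumptions for illustration, not the production code:

```python
from typing import Optional

def extract_finding(record: dict) -> Optional[dict]:
    """Pull the fields the patcher needs from one DynamoDB Streams record.

    Only INSERT/MODIFY events with CRITICAL or HIGH severity are actioned.
    The attribute names used here are illustrative -- adapt them to your
    actual table schema.
    """
    if record.get("eventName") not in ("INSERT", "MODIFY"):
        return None
    image = record["dynamodb"]["NewImage"]
    severity = image["severity"]["S"]
    if severity not in ("CRITICAL", "HIGH"):
        return None
    return {
        "cve_id": image["cveId"]["S"],
        "package": image["packageName"]["S"],
        "image_uri": image["imageUri"]["S"],
        "severity": severity,
    }

def handler(event, context):
    """Lambda entry point, invoked by DynamoDB Streams."""
    findings = [f for r in event.get("Records", [])
                if (f := extract_finding(r)) is not None]
    # ...determine the base image, call Bedrock for patch commands,
    # write the patch script to S3, and update the SSM signal parameter...
    return {"actionable": len(findings)}
```

The same severity filtering could also be pushed upstream with an event source mapping filter, so the Lambda is only invoked for actionable findings in the first place.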

Amazon ECR: The Image Repository – The central repository for all container images. Lambda interacts with ECR to fetch image metadata (tags, manifest) necessary for the patching process.

AWS Bedrock (Generative AI): The Intelligent Patch Creator – This is where the magic happens! The Lambda function sends the vulnerability details (CVE ID, package name, affected version, base OS) to an AWS Bedrock model. Bedrock, leveraging its generative AI capabilities, intelligently analyzes this information and generates the precise shell commands (e.g., apt-get update && apt-get install -y <package-name>=<fixed-version>) required to patch the vulnerability within the Dockerfile context. This eliminates manual script creation and dramatically speeds up the patching process.
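
A sketch of that step is below. The prompt wording, model ID, and request/response body shape are assumptions (Bedrock body formats vary by model family); invoke_model itself is the standard boto3 bedrock-runtime call:

```python
import json

def build_patch_prompt(cve_id: str, package: str,
                       fixed_version: str, base_os: str) -> str:
    """Compose the instruction sent to the Bedrock model.

    Asks for shell commands only, so the completion can be dropped
    straight into a patch script without post-processing.
    """
    return (
        f"You are patching a container base image running {base_os}. "
        f"Vulnerability {cve_id} affects package '{package}'. "
        f"Return only the shell commands to upgrade it to {fixed_version}, "
        f"one command per line, with no commentary."
    )

def generate_patch_commands(client, model_id: str, prompt: str) -> str:
    """Invoke a Bedrock text model and return its raw completion.

    `client` is a boto3 'bedrock-runtime' client; the body keys below
    ('prompt', 'max_tokens', 'completion') are a sketch and depend on
    the model family you choose.
    """
    response = client.invoke_model(
        modelId=model_id,
        contentType="application/json",
        body=json.dumps({"prompt": prompt, "max_tokens": 512}),
    )
    return json.loads(response["body"].read())["completion"]
```

Keeping the prompt builder separate from the invocation makes it easy to unit-test the prompt content without touching AWS.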

Amazon S3: The Patch Script Store – The Lambda function stores the patch commands generated by Bedrock as temporary patch scripts in an S3 bucket. This ensures an auditable trail and provides a robust, accessible location for the next step.

AWS Systems Manager (SSM) Parameter Store: The Signal Tower – To gracefully signal the image build process, the Lambda function updates a specific SSM parameter for the relevant base image. This parameter acts as a signal to the AWS Image Builder pipelines, indicating that a new patch script has been generated and a rebuild is required.
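
Concretely, the hand-off from Lambda to Image Builder might look like the following sketch. The bucket layout, parameter naming scheme, and helper names are my assumptions; put_object and put_parameter are the standard boto3 calls:

```python
def patch_script_key(base_image: str, cve_id: str) -> str:
    """Deterministic S3 key for a generated patch script (assumed layout)."""
    return f"patch-scripts/{base_image}/{cve_id}.sh"

def signal_parameter_name(base_image: str) -> str:
    """SSM parameter the Image Builder pipeline watches (assumed naming)."""
    return f"/image-builder/{base_image}/patch-script"

def publish_patch(s3, ssm, bucket: str, base_image: str,
                  cve_id: str, script: str) -> str:
    """Store the patch script in S3, then update the SSM signal parameter.

    `s3` and `ssm` are boto3 clients. Returning the script's S3 URI lets
    the caller log or audit the hand-off.
    """
    key = patch_script_key(base_image, cve_id)
    s3.put_object(Bucket=bucket, Key=key, Body=script.encode())
    uri = f"s3://{bucket}/{key}"
    ssm.put_parameter(
        Name=signal_parameter_name(base_image),
        Value=uri,
        Type="String",
        Overwrite=True,  # each new patch overwrites the previous signal
    )
    return uri
```

Writing to S3 before updating the parameter matters: by the time the pipeline sees the signal, the script it points at already exists.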

AWS Image Builder: The Automated Forge – AWS Image Builder pipelines are configured to monitor specific SSM parameters. Upon detecting an update to the relevant parameter, it springs into action. It retrieves the base image, injects the generated patch script from S3 into the Dockerfile/build process, and then builds a new, patched container image. This newly built image is then pushed back to ECR with updated tags.

Final Thoughts

This AI-powered golden base image auto-patching solution marks a significant leap forward in container security posture. Embracing generative AI with AWS Bedrock and integrating it with the existing AWS ecosystem not only drastically reduced the exposure window for critical and high vulnerabilities but also empowered teams by removing a significant operational burden. This approach demonstrates the power of combining modern cloud services with cutting-edge AI to build resilient, secure, and future-proof infrastructure.

Backstage Custom Field Extension: Dynamic Field Updates

Backstage makes building developer portals a breeze — especially with Software Templates for scaffolding services. But what if you want to build dynamic forms?

For example:

“Pick an AWS account, and auto-select the right IAM role or environment based on the account.”

Let’s walk through how I built an AwsAccountPicker — a custom field extension that listens to changes in another field and updates itself accordingly.

I am using FieldProps (from @rjsf/utils) instead of the Backstage-specific FieldExtensionComponentProps.

Why FieldProps? While Backstage templates usually encourage using FieldExtensionComponentProps, FieldProps gives you full access to the underlying JSON Schema Form engine, and with it more control, especially for dynamic behaviours like reacting to other fields.

Requirement

A custom field extension (AwsAccountPicker) that:

  • Fetches a list of AWS accounts.
  • Observes another form field (like serviceName) via formContext.formData.
  • Dynamically selects the appropriate account based on a pattern match.

Setting Up the Field Extension

In Backstage frontend app/src/components/scaffolder/customScaffolderExtensions.tsx:

import { FieldExtensionOptions } from '@backstage/plugin-scaffolder-react';
import { AwsAccountPicker } from './AwsAccountPicker';

export const awsAccountPickerExtension: FieldExtensionOptions = {
  name: 'AwsAccountPicker',
  component: AwsAccountPicker,
};

Inside app/src/components/scaffolder/AwsAccountPicker.tsx:

import React, { useEffect, useMemo } from 'react';
import { FieldProps } from '@rjsf/utils';
import { TextField, MenuItem } from '@material-ui/core';

export const AwsAccountPicker = ({
  formData,
  onChange,
  uiSchema,
  registry,
  schema,
}: FieldProps) => {
  const allFormData = registry.formContext?.formData ?? {};
  const uiOptions = uiSchema?.['ui:options'] ?? {};

  const accounts = [
    { id: '123456789012', name: 'dev-account' },
    { id: '210987654321', name: 'prod-account' },
  ]; // In a real component, fetch these dynamically from an API (e.g. via a hook)

  const valueFrom = uiOptions.autofillFrom;
  const outputType = uiOptions.output || 'id';
  const serviceName = allFormData[valueFrom];

  useEffect(() => {
    if (!serviceName) return;
    const expectedAccount = `${serviceName}-account`;
    const match = accounts.find(acc =>
      expectedAccount.toLowerCase().includes(acc.name.toLowerCase())
    );
    if (match && formData !== match[outputType]) {
      onChange(match[outputType]);
    }
  }, [serviceName, formData, onChange, outputType]);

  return (
    <TextField
      select
      label={schema.title}
      value={formData ?? ''}
      onChange={e => onChange(e.target.value)}
    >
      {accounts.map(account => (
        <MenuItem key={account.id} value={account[outputType]}>
          {account.name}
        </MenuItem>
      ))}
    </TextField>
  );
};

Above is a simplified version of the component using FieldProps.

How this Works
  • registry.formContext.formData gives you access to other form fields.
  • uiSchema['ui:options'].autofillFrom defines which field to observe.
  • useEffect triggers when the observed field changes and auto-updates the current field.

This approach is clean, does not require extra state management, and works naturally with Backstage scaffolder templates.

Using It in a Template

In template.yaml:

parameters:
  - title: Service Details
    required:
      - serviceName
      - awsAccount
    properties:
      serviceName:
        type: string
        title: Service Name
      awsAccount:
        title: AWS Account
        type: string
        ui:field: AwsAccountPicker
        ui:options:
          autofillFrom: serviceName
          output: id

With this setup, if serviceName = my-service, and your AWS accounts include my-service-account, the form will auto-select the correct account.

The same pattern can be extended to:

  • Conditionally fetch data based on other fields.
  • Support custom inputs with validation.
  • Handle grouped permissions, environments, or region selectors.

AWS Solutions Architect Certification Path – My Take!

Passing AWS certifications can be a challenging yet rewarding journey, especially for those of us looking to strengthen our cloud knowledge and prove our expertise. Having recently achieved the below certifications:

  1. AWS Cloud Practitioner
  2. Solutions Architect Associate, and
  3. Solutions Architect Professional

I am glad to share my preparation strategies, study materials, and some practical tips to help you succeed.

I have around five years of hands-on experience with AWS, so my approach to each exam varied in depth and focus, especially as I progressed through the levels. Below is a breakdown of my preparation, resources used, and insights gained. I hope this post can serve as a helpful resource for those on a similar path.

Exam Preparation Strategy

Each AWS certification exam varies in difficulty, scope, and type of knowledge required. My approach evolved as I moved from Cloud Practitioner to the Professional level, focusing on more complex and architectural concepts as I progressed.

Cloud Practitioner

Objective – This entry-level certification covers foundational AWS knowledge. It’s ideal for anyone who wants to understand the basics of cloud computing and AWS services.

My Approach – I brushed up on cloud fundamentals with Stephane’s Udemy course, focusing only on topics I could not answer correctly in the practice tests. I completed a few practice exams from Udemy and AWS Skill Builder. Since this exam was relatively straightforward, I didn’t need an extensive study plan.

Time Investment: Approximately 3 days.

Solutions Architect Associate

Objective – The associate-level exam dives deeper into architectural concepts, covering core AWS services, solutions, and best practices for solution design.

My Approach – Again, I took multiple practice tests on Udemy to identify weak areas and brushed up on specific topics through Stephane’s course. Practice tests were essential for recognizing question patterns and homing in on areas I needed to revisit. AWS Skill Builder’s tests also provided valuable insights.

Time Investment – Roughly 2 weeks.

Tips

Focus on Keywords - Look out for keywords in questions that signal what AWS is prioritizing (e.g., scalability, cost optimization).

Eliminate Wrong Answers - Often, the wrong answers are easier to spot when you understand the core AWS principles.

AWS Product Preference - AWS often highlights their own solutions (like Aurora over MySQL) in the exams, so keep an eye out for those.

Time Management - Understand that 15 unscored questions exist, often tricky and worded differently, which you don’t need to spend too much time on.

Solutions Architect Professional

Objective – This exam requires a comprehensive understanding of AWS services, advanced architectural concepts, and the ability to design complex solutions that align with AWS best practices.

My Approach – I relied heavily on practice exams from Udemy and AWS Skill Builder, as well as Stephane’s in-depth course. The professional-level exam requires a strong grasp of architecture, so I used practice exams to pinpoint areas for improvement.

Time Investment – About 3 weeks, including multiple practice tests and thorough review.

Tips

Conceptual Understanding - Passing the Professional exam requires much more than rote memorization. You need to understand AWS services deeply and know how to integrate them effectively.

Mindset - Focus on understanding the material rather than just passing. This exam is difficult to pass using dumps alone – a strong conceptual understanding is necessary.

Long Questions - The questions are mostly long, stating different requirements or conditions. Think of it this way: the longer the question, the more clues and hints you get ;). Also, read the actual question first (it is generally a one-liner at the end of the requirements). Doing so helps you relate the requirements to the options and get a clearer picture when you look at the solutions. This really worked for me!

Time Management - Understand that 10 unscored questions exist, often tricky and worded differently (covering new services), which you don’t need to spend too much time on. Make use of the flags and do not spend too much time on a single question. Select the option you think could be the answer and move on. If you have time at the end, you can revisit the flagged questions.

Courses Referred

Throughout my preparation, I used a few main resources that I found to be both comprehensive and reliable. Here are the ones that worked well for me:

Stephane’s Courses on Udemy – These were invaluable across all three exams. Stephane’s teaching style is clear and thorough, and his courses cover the exact knowledge needed to understand AWS concepts and pass the exams. Here are the links:

  1. Cloud Practitioner
  2. SAA
  3. SAP

Practice Tests – I used the below practice tests:

  1. Cloud Practitioner – SkillBuilder
  2. SAA – Udemy Stephane, SkillBuilder
  3. SAP – Udemy Tutorialdojo, Udemy Stephane, SkillBuilder

AWS Skill Builder – AWS’s official Skill Builder platform provides practice exams, allowing me to identify knowledge gaps and get familiar with AWS’s question format. I used this (especially the practice tests and knowledge-badge learning tests) as it was included in my employer’s learning plan and I had free access to it. The practice exams here are good and emulate the real exam, even down to the scoring format.

AWS Documentation – The AWS documentation is also very good, and I often referred to it for specific items.

Additional Tips

Prerequisites

Here’s a quick breakdown of the recommended experience levels for each exam based on my experience:

  1. Cloud Practitioner – You can pass this exam with minimal AWS experience by reviewing course material and practice exams.
  2. Solutions Architect Associate – I’d recommend spending time understanding the basic concepts, practicing, and getting AWS experience or equivalent training. Hands-on experience with core services (like EC2, S3, VPC, and IAM) is beneficial.
  3. Solutions Architect Professional – This exam is challenging and requires an in-depth understanding of AWS architecture, integrations, and troubleshooting. Having hands-on experience designing and deploying solutions on AWS will make it significantly easier.

Registering for the Exam

  1. Check for Discounts and Free Retakes – AWS periodically offers free retakes or exam discounts. You’ll also receive a 50% discount voucher for your next exam upon passing, which is helpful if you plan to pursue multiple certifications. At the time of writing (Nov 2024), AWS was running such an offer, so check the exam page for current promotions.
  2. Request Exam Accommodations – Request additional time (30 minutes) if you are not a native English speaker. This is particularly useful for the Professional exam, where you might face a time crunch.

Final Thoughts

Passing AWS certifications is a commitment, but with a structured approach and consistent practice, it’s achievable. For me, the journey from Cloud Practitioner to Solutions Architect Professional provided me with a deeper understanding of AWS services and their application in real-world scenarios.

Each certification level has a different focus, so tailor your preparation strategy accordingly. Use courses, practice exams, and AWS’s own resources like Skill Builder and the official AWS documentation.

For each exam, focus on truly understanding the concepts rather than just aiming to pass. This approach not only prepares you for the exam but also strengthens your skills for real-world AWS projects. While it might be tempting to rely on dumps, I strongly recommend focusing on concept mastery, especially for the Associate and Professional exams. Practitioner might be manageable with rote learning, but Associate and Professional levels demand deep understanding.

Good luck, and happy studying! I hope these insights can help you achieve your AWS certification goals.

Manipulating JSON with jq

Recently I was working on a project using AWS CLI and happened to come across some cool jq techniques of manipulating JSON. I will describe what my use-case was, but similar techniques can be applied whenever JSON is involved.

jq is a lightweight JSON processor written in C. You can find more information about the tool, and how to install it, in their official documentation. It’s quite powerful and capable of quite a lot: parsing, manipulating, and processing JSON files. Now let’s see how easy it was to get my work done using jq.

The Use Case

We use AWS Image Builder to build EC2 AMIs, and the requirement here was to explicitly update the ami_name field of an existing Image Builder Distribution Configuration whenever a pipeline is triggered. There are some options in the distribution configuration to specify what we want to name the output AMI, but we wanted it to be very custom (including the base AMI version) and dynamic. I will not be talking about how Image Builder works, as it’s a separate topic in itself; however, you can read about it here.

The Problem

So to achieve this, we just needed to run two AWS CLI commands:

  1. Get the existing distribution configuration:
aws imagebuilder get-distribution-configuration --output json --distribution-configuration-arn $distribution_config_arn

This returns a JSON response describing the existing distribution configuration, like the one below:

{
  "requestId": "42b6bcf5-9505-4c42-ad38-7efd8177f2ac",
  "distributionConfiguration": {
    "arn": "arn:aws:imagebuilder:us-east-1:123456789012:distribution-configuration/amazon-eks-node-latest-pipeline-distribution-config",
    "name": "amazon-eks-node-latest-pipeline-distribution-config",
    "description": "amazon-eks-node-latest image builder pipeline",
    "distributions": [
      {
        "region": "us-east-1",
        "amiDistributionConfiguration": {
          "name": "amazon-eks-node-latest-golden-ami-{{ imagebuilder:buildDate }}",
          "description": "amazon-eks-node-latest image builder pipeline",
          "amiTags": {
            "family": "amazon-eks-node-latest",
            "Name": "amazon-eks-node-latest-golden-ami-{{ imagebuilder:buildDate }}"
          },
          "launchPermission": {}
        }
      }
    ],
    "dateCreated": "2024-03-06T14:59:37.286Z",
    "tags": {
      "owner": "platform",
      "project": "image-builder",
      "env": "prod",
      "family": "amazon-eks-node-latest",
      "managedby": "terraform"
    }
  }
}

2. Update only the amiDistributionConfiguration.name field of this distribution configuration.

Now, it is not possible to update just one field of the distribution configuration. Notice that distributions is a list in the response: to update it, we need to take the full response, change the fields we want, and then pass that JSON as input to the update-distribution-configuration CLI command.

But wait: the input JSON syntax that update-distribution-configuration expects (shown below) does not exactly match the response returned by the get-distribution-configuration command.

{
  "distributionConfigurationArn": "arn:aws:imagebuilder:us-east-1:123456789012:distribution-configuration/amazon-eks-node-latest-pipeline-distribution-config",
  "description": "amazon-eks-node-latest image builder pipeline",
  "distributions": [
    {
      "region": "us-east-1",
      "amiDistributionConfiguration": {
        "name": "Name {{imagebuilder:buildDate}}",
        "description": "An example image name with parameter references"
      }
    },
    {
      "region": "eu-west-2",
      "amiDistributionConfiguration": {
        "name": "My {{imagebuilder:buildVersion}} image {{imagebuilder:buildDate}}"
      }
    }
  ]
}

This problem arises in a variety of other scenarios too (AWS or otherwise). jq to the rescue! With jq, it’s just a one-line command to process the JSON and transform it the way we want.

The Solution

The solution is to transform the response from get-distribution-configuration into the required syntax using jq, and then pass it to the update-distribution-configuration command:

# 1. Get the existing distribution configuration
distribution_config=$(aws imagebuilder get-distribution-configuration --output json --distribution-configuration-arn "$arn")

# 2. Transform the JSON to match the update command's expected input,
#    injecting the new AMI name (held in $new_ami_name) via the UPDATED_NAME jq variable
updated_distribution_config=$(echo "$distribution_config" | jq --arg UPDATED_NAME "$new_ami_name" '{ distributionConfigurationArn: .distributionConfiguration.arn, description: .distributionConfiguration.description, distributions: [.distributionConfiguration.distributions[] | .amiDistributionConfiguration.name = $UPDATED_NAME]}')

# 3. Apply the updated configuration
aws imagebuilder update-distribution-configuration --cli-input-json "$updated_distribution_config"

The transformation happens in the second command. Split across lines for clarity, it looks like this:

echo "$distribution_config" |
jq --arg UPDATED_NAME "$new_ami_name" '{
  distributionConfigurationArn: .distributionConfiguration.arn,
  description: .distributionConfiguration.description,
  distributions: [.distributionConfiguration.distributions[] | .amiDistributionConfiguration.name = $UPDATED_NAME]
}'

The above filter tells jq to create a JSON object containing:

  • A distributionConfigurationArn attribute containing the value of .distributionConfiguration.arn
  • A description attribute containing the value of .distributionConfiguration.description
  • A distributions attribute containing a list of distributions from .distributionConfiguration.distributions[]
  • While building that list, it also sets every .amiDistributionConfiguration.name field to our desired value (the UPDATED_NAME variable here), for all occurrences.

As you can see, jq builds the new JSON structure while simultaneously parsing the existing JSON for the required fields and replacing the ones we need to change. The result is a JSON document like the one below, which is exactly what the update-distribution-configuration command expects:

{
  "distributionConfigurationArn": "arn:aws:imagebuilder:us-east-1:123456789012:distribution-configuration/amazon-eks-node-latest-pipeline-distribution-config",
  "description": "amazon-eks-node-latest image builder pipeline",
  "distributions": [
    {
      "region": "us-east-1",
      "amiDistributionConfiguration": {
        "name": "UPDATED_NAME",
        "description": "amazon-eks-node-latest image builder pipeline",
        "amiTags": {
          "family": "amazon-eks-node-latest",
          "Name": "amazon-eks-node-latest-golden-ami-{{ imagebuilder:buildDate }}"
        },
        "launchPermission": {}
      }
    }
  ]
}
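
If you prefer to avoid jq, the same reshaping can be done with a short Python function. This is a sketch assuming the response shape shown earlier; the function name is mine:

```python
import json

def to_update_input(get_response: dict, new_name: str) -> dict:
    """Reshape a get-distribution-configuration response into the
    cli-input-json shape expected by update-distribution-configuration,
    setting every amiDistributionConfiguration.name to new_name."""
    config = get_response["distributionConfiguration"]
    distributions = []
    for dist in config["distributions"]:
        dist = json.loads(json.dumps(dist))  # deep copy; leave the input untouched
        dist["amiDistributionConfiguration"]["name"] = new_name
        distributions.append(dist)
    return {
        "distributionConfigurationArn": config["arn"],
        "description": config["description"],
        "distributions": distributions,
    }
```

The result can be dumped with json.dumps and passed to the CLI via --cli-input-json, just like the jq version.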

Though this is just a simple example, jq as you see is quite powerful. 🙂

Empowering Developers & Streamlining Workflows with Backstage 

In today’s fast-paced digital landscape, efficiency and consistency are paramount for any organization striving to stay ahead of the curve.

This becomes even more crucial in a cross-located, dynamic team setup. Enter Backstage: a powerful platform designed to centralize the service catalog and standardize service and infrastructure creation.

Backstage, originally developed by Spotify, is an open platform for building developer portals. It unifies infrastructure tooling, services, and documentation to create a streamlined development environment from end to end. 

Why Backstage? 

Backstage offers a comprehensive solution to the challenges we face in managing an ever-expanding ecosystem of services, APIs, and infrastructure. By centralizing the service catalog, Backstage provides a unified view of all internal tools and resources, enabling teams to easily discover, consume, and share services across the organization. This not only promotes transparency but also facilitates collaboration and knowledge sharing among teams.

Furthermore, Backstage’s templating capabilities can revolutionize the way we create and manage services and infrastructure. With customizable templates, we can standardize project setups, ensuring consistency and reducing the time and effort required to onboard new services. This standardization improves efficiency and enhances system reliability and maintainability.

Journey towards Centralization and Standardization 

The decision to adopt Backstage is often based on a commitment to excellence and continuous improvement. As teams grow, so does the complexity of managing services and infrastructure. Hence the need for a centralized platform that provides visibility and control over the entire ecosystem while also promoting best practices and standardization.

With Backstage, we can take a proactive approach to addressing these challenges. By centralizing the service catalog, we break down silos and create a culture of collaboration and innovation. Teams can easily discover and leverage existing services, reducing duplication of effort and accelerating time-to-market for new services.

Moreover, by using templates to standardize service and infrastructure creation, we can ensure consistency and reliability across the organization. Whether it is deploying a new microservice or provisioning a cloud infrastructure, teams can rely on predefined templates to guide them through the process, eliminating guesswork and reducing the risk of errors. 

First things First 

Backstage can be overwhelming to start with and requires some initial effort to get going. Because it provides a plugin-based framework, supporting custom requirements later is easy. Focusing on the core features first keeps setup time low and provides immediate value to developers.

The features that could be rolled out initially: 

  1. Centralized service catalog with updated and relevant service metadata – The most important part here is to ensure that the service metadata is relevant and useful, with proper ownership set, along with an effortless way to register entities in the catalog.
  2. Standardized service/infrastructure scaffolding – Standardizing and templatizing service and infrastructure creation has a cascading effect on managing services more efficiently: cost optimization, visibility, and operational excellence. The best part of templates is that they are easy to create, can be reused, and enforce standards out of the box when set up correctly.
  3. Overview of the tech ecosystem (aka TechRadar) – An overview of the technology ecosystem is crucial to understanding how various tools/frameworks are leveraged, providing better visibility, supporting decisions on streamlining, and helping identify and evaluate what is important.

Conclusion 

Adoption of Backstage represents a significant milestone in the journey towards centralization and standardization. By embracing this powerful platform, we can not only improve development workflows but also lay the foundation for future growth and scalability. The possibilities that Backstage brings, and the impact it can have on an organization, are huge.

It is also essential to promote a collaborative model where developers feel empowered, contribute to Backstage plugin development, and keep improving the offerings.

Stay tuned for more updates on Backstage and how we can set it up in a production environment.

CI/CD with GitHub Actions in Serverless

As discussed in my previous post, setting up a Serverless project is very easy and fast. In this post I will discuss how we can achieve Continuous Integration and Continuous Deployment (aka CI/CD) for such projects using GitHub Actions.

This is the setup I use in my Serverless projects; it is easy to set up (and lives as code) and works flawlessly.

You can check how to get started with GitHub Actions here.

Overview

Stack

  • python 3.9
  • serverless V3
  • pylint 2.16.2
  • coverage 7.2.2
  • coverage-badge 1.1.0
  • anybadge 1.14.0
  • pytest 7.2.1

Code Structure

├── .github
│   ├── CODEOWNERS
│   └── workflows
│       └── deploy.yaml
├── images
│   ├── coverage.svg
│   └── pylint.svg
├── src
│   └── main.py
├── tests
│   ├── unit
│   │   └── test_main.py
│   └── e2e
│       └── test_e2e.py
├── requirements.txt
├── serverless.yml
├── serverless.doc.yml
└── README.md

The .github folder contains the GitHub workflows which will be primarily used for setting up CI/CD.

Before proceeding further, here’s the GitHub Action workflow file.

name: deploy

on:
  push:
    branches:
      - main
    paths:
      - "src/**"
      - "tests/**"
      - "requirements.txt"
      - "serverless.*"
  pull_request:
    branches:
      - main
    paths:
      - "src/**"
      - "tests/**"
      - "requirements.txt"
      - "serverless.*"

permissions:
  id-token: write
  contents: write
  pull-requests: write

jobs:
  deploy:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        python-version: [3.9]
        node-version: [19.7.0]

    steps:
      - name: Checkout Source
        uses: actions/checkout@v3
        with:
          token: ${{ github.token }}

      - name: Setup Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install Python Dependencies
        run: |
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}

      - name: Install Serverless Framework
        run: sudo npm install -g serverless

      - name: Lint Code in Stage
        if: github.ref != 'refs/heads/main'
        run: |
          score=$(pylint src/* tests/* | sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p')
          echo "PyLint score is: ${score}"
          # Numeric comparison via awk; a plain [[ ${score} < 8 ]] would compare strings lexicographically
          if awk -v s="${score}" 'BEGIN { exit !(s < 8) }'; then echo "[ERROR] PyLint score is less than 8! Failing build..."; exit 1; fi
          anybadge -o -l pylint -v $score --file 'images/pylint.svg' 8=red 8.5=orange 9=yellow 10=green

      # AWS AuthN (GitHub OIDC)

      - name: Configure AWS Credentials in Stage
        if: github.ref != 'refs/heads/main'
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: eu-west-1
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/${{ secrets.SLS_DEPLOY_ROLE_NAME }}Stage
          role-session-name: ${{ secrets.SLS_DEPLOY_ROLE_NAME }}Stage

      - name: Configure AWS Credentials in Prod
        if: github.ref == 'refs/heads/main'
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: eu-west-1
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/${{ secrets.SLS_DEPLOY_ROLE_NAME }}Prod
          role-session-name: ${{ secrets.SLS_DEPLOY_ROLE_NAME }}Prod

      - name: Serverless AWS AuthN
        run: sls config credentials --provider aws --key ${{ env.AWS_ACCESS_KEY_ID }} --secret ${{ env.AWS_SECRET_ACCESS_KEY }}

      # Stage deployment

      - name: Run Unit Tests in Stage
        if: github.ref != 'refs/heads/main'
        run: |
          coverage run -m pytest tests/unit/* --color=yes --verbose
          coverage xml -i --skip-empty
          coverage-badge -f -o images/coverage.svg

      - name: Post Code Coverage in PR
        if: github.ref != 'refs/heads/main'
        uses: orgoro/coverage@v3
        with:
          coverageFile: coverage.xml
          token: ${{github.token}}
          thresholdAll: 0.80
          thresholdNew: 0.90

      - name: Install Serverless Plugins in Stage
        if: github.ref != 'refs/heads/main'
        run: |
          sls plugin install -n serverless-python-requirements --stage stage
          sls plugin install -n serverless-openapi-documenter --stage stage

      - name: Deploy in Stage
        if: github.ref != 'refs/heads/main'
        run: sls deploy --stage stage

      - name: Run E2E Tests in Stage
        if: github.ref != 'refs/heads/main'
        run: env=stage pytest tests/e2e/* --color=yes --verbose

      # OpenApi Spec

      - name: Detect Changes in API Spec in Stage
        if: github.ref != 'refs/heads/main'
        uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            apidoc:
              - 'serverless.doc.yml'

      - name: Generate OpenAPI Spec in Stage
        if: github.ref != 'refs/heads/main' && steps.filter.outputs.apidoc == 'true'
        run: serverless openapi generate -f yaml --stage stage

      # Update PR in Stage

      - name: Update PR in Stage
        uses: stefanzweifel/git-auto-commit-action@v4
        if: github.ref != 'refs/heads/main'
        with:
          file_pattern: 'images/* openapi.yml'
          commit_message: '[DEPLOYER] auto: Update API spec & lint/coverage scores'
          commit_user_name: GitHub Actions <[email protected]>

      # Prod deployment

      - name: Run Unit Tests in Prod
        if: github.ref == 'refs/heads/main'
        run: python -m pytest tests/unit/* --color=yes --verbose

      - name: Install Serverless Plugins in Prod
        if: github.ref == 'refs/heads/main'
        run: |
          sls plugin install -n serverless-python-requirements --stage prod
          sls plugin install -n serverless-openapi-documenter --stage prod

      - name: Deploy in Prod
        if: github.ref == 'refs/heads/main'
        run: |
          sls create_domain --stage prod
          sls deploy --stage prod

      - name: Run E2E Tests in Prod
        if: github.ref == 'refs/heads/main'
        run: env=prod pytest tests/e2e/* --color=yes --verbose

The steps are self-explanatory; however, I would like to highlight a few things:

  1. Post Code Coverage
    • This step uses the orgoro/coverage GitHub Action to calculate the coverage of the Python project and post the report as a PR comment.
    • This is very useful: the committer/reviewer gets to know if the PR contains code changes that alter the coverage percentage. Additionally, thresholds can be configured to fail the build in case the required coverage is not met, ensuring quality.
  2. Generate OpenAPI Spec
    • For a RESTful Serverless project, the API spec is of great importance, and it can be handled as code in Serverless using the serverless-openapi-documenter plugin.
    • In this case, the API spec file is regenerated if there is an update to serverless.doc.yml. (The change is detected using the dorny/paths-filter action.)
    • In the step “Update PR in Stage“, the newly generated API spec is auto-added to the PR (using the git-auto-commit action) along with the Pylint and coverage badges (used in the README file to indicate the project’s overall health).
  3. Authentication
    • For AWS auth, GitHub OIDC is used here (which is recommended); however, you can simply set AWS user tokens as GitHub secrets and use those instead.

The result is a fully automated CI/CD pipeline with automatic API spec updates and code coverage/lint badges pushed to the PR.

If you look at the GitHub Action, there are steps dedicated to Stage and Prod to ensure the required steps/tests run only in the relevant environment.

This setup works flawlessly. With GitHub branch protection enabled and the build action configured as a required status check, code is merged only with a passing build, keeping buggy code out of the main branch and ensuring quality.

Terraform is used to provision static infrastructure (secrets/IAM etc), however that is not discussed in this post.

I will discuss setting up a Serverless Python REST service in detail in the next post, covering all aspects 🙂

Backstage – Authorization

Backstage by Spotify is a platform for building a dev portal for your organisation. It provides a robust framework which can be used to customize and create a dev portal that is custom fit, as I’d like to say 😉

The tool is evolving rapidly with new plugins/features being added quite often. Recently I worked to implement custom Authorization to access Backstage, which I found really interesting and thus am sharing my experience on how I did it.

We have Backstage hosted in K8s (EKS) with GitOps based deployments managed through ArgoCD. We also have Templates (with custom scaffolder actions) to provision new microservices. While setting up the AuthZ, granular access to these Templates was one of our requirements and we wanted to manage its access in a proper way.

Backstage has several AuthN integrations out of the box and it’s pretty easy to set them up. In our case we use OneLogin for AuthN. For AuthZ, Backstage provides a permissions plugin. I am not discussing how to configure and set that up, as it’s well documented.

An authenticated user is allotted some default user groups. To have proper group-based AuthZ, the first thing we did was develop a custom sign-in resolver to set custom groups for authenticated users.

// auth.ts custom login snippet
onelogin: providers.onelogin.create({
        signIn: {
          // Custom sign-in resolver
          async resolver({ result }, ctx) {
            const email = result.fullProfile.username ?? '';
            const [id] = email.split('@');
            let entity:any;
 
            try {
              ({ entity } = await ctx.findCatalogUser({
                entityRef: {
                  kind: 'user', 
                  namespace: BACKSTAGE_NAMESPACE, 
                  name: id
                }
              }));      
            }
            catch (error)  {
              if(error instanceof NotFoundError){
                entity = {
                  kind: 'user',
                  namespace: BACKSTAGE_NAMESPACE,
                  name: id,
                };
              }
            }

            // Set default group ownerships
            const membershipRefs = entity.relations
              ?.filter(
                (r:any) => r.type === RELATION_MEMBER_OF && r.targetRef.startsWith('group:'),
              )
              .map((r:any) => r.targetRef) ?? [];

            const ownershipRefs:string[] = Array.from(new Set([`group:${BACKSTAGE_NAMESPACE}/default`, ...membershipRefs]));

            return ctx.issueToken({
              claims: {
                sub: stringifyEntityRef(entity),
                ent: ownershipRefs,
              },
            });
          },
        },
      }),

The above resolver adds default memberships as below:

  1. If authenticated:
    • group:backstage/default
    • The explicit groups set for the user entity created in Backstage
  2. If not authenticated (guest): user:default/guest

Our approach here is to manage AuthZ from Backstage by creating the User/Group entities in Backstage and mapping the authenticated user to inherit these specified groups. For this to work, the id the user authenticates with must be identical to the one created in Backstage. In the example above, this id is derived from the user’s email, using the part preceding the @.
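In shell terms, the id derivation the resolver performs is simply the following (the email value and the backstage namespace are hypothetical examples):

```shell
# Mirrors `email.split('@')` in the sign-in resolver: the catalog User entity
# name must equal the local part of the authenticated email.
email="username.surname@example.com"   # hypothetical authenticated email
id="${email%%@*}"                      # strip everything from the first @ onwards
echo "user:backstage/${id}"            # the entity ref looked up in the catalog
```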

Thus we have users.yaml and groups.yaml, which create the users and groups on the Backstage side and map groups to the user, like below:

apiVersion: backstage.io/v1alpha1
kind: User
metadata:
  namespace: backstage
  name: username.surname
spec:
  profile:
    displayName: Username Surname
  memberOf:
    - admin
    ...
---
apiVersion: backstage.io/v1alpha1
kind: Group
metadata:
   namespace: backstage
   name: admin
spec:
   type: team
   profile:
      displayName: Admin
   children: []

Now that we have the Users and Groups ready with correct memberships defined, the next thing is to setup the permission policy.

As stated, we use a custom permission rule to provide access to Templates based on entity tags (ABAC).

// permission-rule.ts snippet
// Custom permission rule to authorize based on entity tags
export const isGroupInTagRule = createCatalogPermissionRule({
  name: 'IS_GROUP_IN_TAG',
  description: 'Checks if an entity tag contains a user group, to allow access to the entity',
  resourceType: 'catalog-entity',
  apply: (resource: Entity, claims:string[]) => {
    if (!resource.metadata.tags) {
     return false;
    }
    return resource.metadata.tags
      .some(tag => claims.includes(`group:backstage/${tag.split(':')[1]}`))
  },
  toQuery: (claims:string[]) => ({
    key: 'metadata.tags',
    values: claims.map(group => `group:${group.split('/')[1]}`),
  }),
});

export const isGroupInTag = createConditionFactory(isGroupInTagRule);

You can refer to the detailed process of defining custom permission rules here; it is required along with the above change to make this work.

The above permission rule provides us with an isGroupInTag condition factory that can be used in the permission policy to authorize entities based on tags.
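The matching performed by the rule’s apply function can be illustrated outside TypeScript with a small shell sketch (tag and claim values are hypothetical, and the backstage namespace is assumed):

```shell
# Mirrors the apply() logic: an entity tag "group:staff" matches a user
# ownership claim "group:backstage/staff"; plain tags are ignored.
tags="recommended microservice group:staff"
claims="group:backstage/default group:backstage/staff"
allowed=false
for tag in $tags; do
  case "$tag" in
    group:*)
      ref="group:backstage/${tag#group:}"     # expand tag to a full group ref
      for claim in $claims; do
        if [ "$claim" = "$ref" ]; then allowed=true; fi
      done
      ;;
  esac
done
echo "allowed=$allowed"    # prints allowed=true for this tag/claim set
```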

Finally let’s have a look at the permission policy:

// permission.ts defining custom authZ policy
class CustomPermissionPolicy implements PermissionPolicy {
  async handle(request: PolicyQuery, user: BackstageIdentityResponse): Promise<PolicyDecision> {

    // Exempt admin from permission checks
    if (isAdmin(user)) {
      return { result: AuthorizeResult.ALLOW };
    }

    // RO permissions
    if (isPermission(request.permission, catalogEntityReadPermission)) {
      return createCatalogConditionalDecision(request.permission, {
        anyOf: [
          catalogConditions.isEntityKind([
            'Domain',
            'Component',
            'System',
            'API',
            'Group',
            'User',
            'Resource',
            'Location',
          ]),
          { // Template RO permission only to groups specified in tags
            allOf: [
              catalogConditions.isEntityKind(['Template']),
              isGroupInTag(
                user?.identity.ownershipEntityRefs ?? [],
              ),
            ],
          },
        ],
      });
    }

    // Deny explicit create/delete permissions
    if (isPermission(request.permission, catalogEntityDeletePermission) || 
        isPermission(request.permission, catalogEntityCreatePermission)) {
      return { result: AuthorizeResult.DENY };
    }

    return { result: AuthorizeResult.ALLOW };
  }
}

// Function to check if user has admin group membership
const isAdmin = (user: BackstageIdentityResponse):boolean => {
  if (typeof(user) === 'object') {
    return user.identity.ownershipEntityRefs.includes('group:backstage/admin');
  }
  return false;
}

The above policy:

  1. Allows full access to users with admin group membership
  2. Allows read-only access to all entities (except Templates)
  3. Allows read access to Templates only to users with membership in a group specified in the Template tags

Here is a snippet of the Template definition, which grants access only to users with the staff membership:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: golden-path-template
  title: Golden Path Template
  description: Create a new microservice from scratch following the golden path
  tags:
    - recommended
    - microservice
    - "group:staff"
...

That’s all! This works perfectly, is scalable and achieves its purpose well. 🙂

Tackling Terraform – Part 1

This post jots down some Terraform scripting blocks that are useful for specific use cases and might be a little tricky if you are starting with Terraform. It also acts as a reference for me to come back to 😉

Terraform is pretty awesome at managing infra as code, and creating resources conditionally is easy to achieve (using count or for_each). I will discuss a use case and how to use Terraform to create infra effectively for such cases.

  • Let’s consider a scenario where we need to create resources based on some nested condition. For example:
    1. Add routes for multiple subnets in multiple route tables to configure VPC peering or a Transit Gateway in AWS.
    2. Add EFS mount targets for multiple subnets for an EFS filesystem in AWS, and so on.

In the above cases, the common thread is that we need to create resources based on nested conditions. Taking the first case, let’s say we have 3 subnets and 3 route table IDs defined as below:

locals { 
  my_subnets = ["10.1.16.0/21", "10.1.24.0/21", "10.1.32.0/21"]
  rt_ids = ["rtb-01", "rtb-02", "rtb-03"]
}

We need to add an RT entry for each of the subnets in all the specified route tables. We could use multiple resource blocks for this, however that would be difficult to maintain with a large number of route tables or subnets. Instead, we can define a nested loop and create the resources in a single block using for_each.

First, let’s create a flattened list.

flatten ensures that this value is a flat list of objects, rather than a list of lists of objects. distinct is to remove duplicate entries if any.

tgw_routes = distinct(flatten([
    for rt_ids in local.rt_ids : [
      for subnets in local.my_subnets : {
        rt_ids  = rt_ids
        subnets = subnets
      }
    ]
  ]))

tgw_routes is a list; now we project it into a map where each key is unique. We’ll combine the rt_ids and subnets values to produce a single unique key per route-table/subnet pair.

resource "aws_route" "my_routes" {
  for_each               = { for entry in local.tgw_routes : "${entry.rt_ids}.${entry.subnets}" => entry }
  route_table_id         = each.value.rt_ids
  destination_cidr_block = each.value.subnets
  transit_gateway_id     = "tgw-01"
}

The above effectively creates the routes in every route table for all the subnets. In the terraform plan, we see the resources will be created with the key rt_id.subnet, i.e.:

aws_route.my_routes["rtb-01.10.1.16.0/21"]
aws_route.my_routes["rtb-02.10.1.16.0/21"]
aws_route.my_routes["rtb-03.10.1.16.0/21"]

aws_route.my_routes["rtb-01.10.1.24.0/21"]
aws_route.my_routes["rtb-02.10.1.24.0/21"]
aws_route.my_routes["rtb-03.10.1.24.0/21"]

aws_route.my_routes["rtb-01.10.1.32.0/21"]
aws_route.my_routes["rtb-02.10.1.32.0/21"]
aws_route.my_routes["rtb-03.10.1.32.0/21"]

Thus we used a single resource block to provision the required resources rather than 9 blocks. To add or remove routes, we just edit the rt_ids or my_subnets list in locals.

A similar approach can be taken whenever you encounter a comparable use case. The advantage of for_each is that it handles creation/deletion of resources appropriately without affecting other existing resources, since resources are referenced not by index (as with count) but by the key we specified (rt_id.subnet).
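To see the cross-product mechanics outside Terraform, the same rt_id/subnet expansion can be mimicked in plain shell, printing the nine for_each keys the plan shows:

```shell
# Emits one line per (route table, subnet) pair, matching the for_each keys
# "rt_id.subnet" that terraform plan displays for aws_route.my_routes.
rt_ids="rtb-01 rtb-02 rtb-03"
subnets="10.1.16.0/21 10.1.24.0/21 10.1.32.0/21"
for rt in $rt_ids; do
  for sn in $subnets; do
    echo "aws_route.my_routes[\"$rt.$sn\"]"
  done
done
```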

Serverless with AWS APIGW and Lambda

Recently I was working on automating a web-hook workflow and ended up using the Serverless framework to implement the solution with AWS API Gateway and Lambda.

It was a breeze to set up, deploy, and manage the infra with Serverless, and the benefits it brings are worth mentioning: scalable, fully managed, and no overhead of managing the infra!

This was the use case:

Set some environment variables in a Terraform Cloud workspace automatically (using notification web-hooks). This required an API backend to authenticate the web-hook request, process it, fetch variables from AWS Secrets Manager, and invoke the Terraform REST API to update these secrets.

Below is the architecture that was implemented:

Similar use cases involving web-hooks are very common, and a serverless setup makes something like the above really simple to achieve.

One challenge I faced was with authorization of the web-hook request. AWS API Gateway offers a Lambda Authorizer, however it only lets us validate AuthN tokens from the request headers. Terraform (and even Git) web-hooks instead send a keyed-hash message authentication code (HMAC) signature of the request body: the body is hashed using a shared key, and the resulting signature is passed as a request header. The consumer hashes the request body with the same key and compares the two signatures to determine the authenticity of the request. However, this requires the request body, which is not available to Lambda Authorizers. The only option left is to include the HMAC signature comparison in the main Lambda function.
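A minimal sketch of that comparison using openssl (the shared key, body, and SHA-512 algorithm are assumptions for illustration; in the actual solution this check lives inside the Lambda handler):

```shell
# Verify an HMAC signature of the raw request body against the signature
# header sent by the webhook. A constant-time comparison is preferable in
# production; plain string equality keeps this sketch short.
verify_sig() {
  local secret="$1" body="$2" received="$3"
  local computed
  computed=$(printf '%s' "$body" | openssl dgst -sha512 -hmac "$secret" | awk '{print $NF}')
  [ "$computed" = "$received" ]
}

SECRET='shared-webhook-key'            # same key configured on the webhook side
BODY='{"run_status":"applied"}'        # raw, unmodified request body
SIG=$(printf '%s' "$BODY" | openssl dgst -sha512 -hmac "$SECRET" | awk '{print $NF}')

verify_sig "$SECRET" "$BODY" "$SIG" && echo "authentic request"
```

The key point is that the body must be hashed exactly as received; any reformatting of the payload before hashing will break the comparison.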

Other settings like API GW request validation and API throttling can be configured in the serverless.yml itself. To implement the above, the serverless.yml looks like below:

service: set-terraform-secrets

frameworkVersion: '2'

package:
  patterns:
    - 'node_modules/**'

plugins:
  - serverless-python-requirements
  - serverless-api-gateway-throttling
  
custom:
  pythonRequirements:
    dockerizePip: non-linux
    slim: true
    
  apiGatewayThrottling:
    maxRequestsPerSecond: 10
    maxConcurrentRequests: 5
  
provider:
  name: aws
  runtime: python3.8
  stage: ${sls:stage}
  region: eu-west-1
  lambdaHashingVersion: '20201221'
  iamRoleStatements:
    - Effect: Allow
      Action:
        - secretsmanager:Get*
        - secretsmanager:List*
      Resource: "arn:aws:secretsmanager:*:*:secret:prod/secrets/*"

functions:
  lambda:
    handler: main.set_tfe_secrets # Set based on your Lambda function handler name
    description: Lambda to set secret variables for Terraform workspaces
    events:
      - http:
          path: /secrets
          method: post
          throttling:
            maxRequestsPerSecond: 100
            maxConcurrentRequests: 50
          request:
            parameters:
              headers:
                X-Notification-Signature: true

Note the X-Notification-Signature header validation, which accepts requests only if they contain this header. This is specific to the Terraform Cloud web-hook.

The plugins install the Python dependencies from the requirements.txt file in the same source-code location, and the throttling plugin sets the API rate-limiting configuration.

The IAM role is required by the Lambda function to fetch secrets and is specific to my use case. The idea is that all IAM role statements can be specified right here.

Note – here, the secrets are created manually and not with Serverless, as they are static resources.

Once this is in place, all that is required to deploy the solution is to run:

sls deploy --stage prod
