Certified Backstage Associate – My Take!

A straightforward breakdown of what to expect, how to prepare, and what actually helped me pass.


Why This Certification?

The Certified Backstage Associate exam, offered by the CNCF, validates your understanding of Backstage — the open-source framework that has become the de facto standard for building Internal Developer Platforms (IDPs). If you’re working in platform engineering or building developer portals, this certification puts a formal stamp on skills that are increasingly in demand.

I recently passed the exam, and I want to share exactly how I prepared — no fluff, just what worked.


My Background Going In

I had roughly 4 years of hands-on experience working with Backstage and the surrounding ecosystem of tools — software catalogs, TechDocs, scaffolding via templates, plugins, and integrating Backstage into real-world platform engineering workflows.

That experience was a massive advantage. If you’ve been actively building or maintaining a Backstage instance, you already have a strong foundation. But experience alone isn’t enough — the exam tests specific concepts, terminology, and details that you might gloss over in day-to-day work.


Exam Overview

Before diving into preparation, here’s what you’re dealing with:

| Detail | Info |
| --- | --- |
| Format | Multiple choice |
| Duration | 90 minutes |
| Passing Score | 75% |
| Delivery | Online, proctored |
| Cost | $250 (includes one free retake) |
| Validity | 2 years |

For discounts, check this page.

Domain Breakdown

The exam covers the following domains:

  1. Backstage Architecture & Terminology — Core concepts, the app structure, frontend/backend separation
  2. Software Catalog — Entity kinds, catalog-info.yaml, entity relationships, processors, providers
  3. Software Templates (Scaffolder) — Template syntax, actions, custom actions, parameters
  4. TechDocs — The docs-like-code approach, MkDocs integration, generation and publishing strategies
  5. Plugins — Plugin architecture, frontend and backend plugins, extension points
  6. Security & Authentication — Auth providers, identity resolution, permissions framework
  7. Deployment & Configuration — app-config.yaml, database setup, deployment strategies
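
To ground the Deployment & Configuration domain, here is a minimal app-config.yaml sketch of the kind the exam expects you to read. The hostnames, org name, and catalog location are placeholders, and this is a study aid rather than a recommended production setup:

```yaml
app:
  title: Example Backstage App
  baseUrl: http://localhost:3000

organization:
  name: Example Org

backend:
  baseUrl: http://localhost:7007
  listen:
    port: 7007
  database:
    # SQLite is the quick-start default; Postgres is typical for real deployments
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: 5432
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}

catalog:
  locations:
    - type: url
      target: https://github.com/example/repo/blob/main/catalog-info.yaml
```

Note the `${...}` environment-variable substitution syntax — the exam expects you to recognize how secrets are kept out of the config file.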

My Preparation Strategy

1. Lean Into Your Hands-On Experience

If you’ve been working with Backstage, don’t underestimate what you already know. Much of the exam felt like recalling things I’d already debugged, configured, or built.

That said, there were areas where my daily work didn’t go deep enough. I rarely thought about the exact lifecycle of entity processing or the specifics of the permissions framework beyond what I needed. The exam does go there.

Action item: Identify the domains above where your hands-on experience is thin. Focus your study time there.

2. The Udemy Practice Test — My Secret Weapon

The single most impactful resource for my preparation was this Udemy practice test:

Certified Backstage Associate – Practice Exam

Here’s why it was so effective:

  • It mirrors the real exam’s style. The phrasing, the depth of questions, and the way options are structured felt very close to the actual test.
  • It exposes your blind spots. I was confident going in, but the practice test humbled me in areas like the permissions framework and some catalog internals I hadn’t thought about deeply.
  • The explanations are useful. Don’t just check if you got the answer right — read the explanation for every question, even the ones you nailed. Sometimes you get an answer right for the wrong reasons.

How I used it:

  1. Took the practice test cold (no prep) to get a baseline
  2. Noted every topic where I got questions wrong or guessed
  3. Studied those specific areas using the official docs
  4. Retook the practice test to confirm I’d closed the gaps
  5. Repeated until I was consistently scoring above 85%

3. Official Backstage Documentation

The official Backstage docs are the primary source of truth for this exam. Key sections to study thoroughly:

  • The Software Catalog — Entity descriptor format, well-known entity kinds, relations, substitutions
  • Software Templates — Template YAML structure, built-in and custom actions, parameter schemas
  • TechDocs — Architecture, recommended vs basic setup, MkDocs configuration
  • Plugins — How the plugin system works, frontend vs backend plugins, the new backend system
  • Auth & Permissions — Sign-in resolvers, the permission policy, resource rules
  • Architecture Decision Records (ADRs) — Skim through the key ADRs to understand why certain design decisions were made
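
As a concrete anchor for the Software Templates material, here is a pared-down Scaffolder template sketch. The template name, skeleton path, and repo owner are illustrative, assuming the built-in fetch:template and publish:github actions:

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: example-nodejs-service
  title: Example Node.js Service
spec:
  owner: group:default/platform-team
  type: service
  parameters:
    - title: Service details
      required:
        - name
      properties:
        name:
          title: Name
          type: string
          description: Unique name of the service
  steps:
    # Fetch the skeleton and substitute template parameters into it
    - id: fetch
      name: Fetch skeleton
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
    # Publish the generated project to a new repository
    - id: publish
      name: Publish to GitHub
      action: publish:github
      input:
        repoUrl: github.com?owner=example&repo=${{ parameters.name }}
```

Be comfortable with the `${{ parameters.x }}` templating syntax and with which actions are built in versus custom — both come up.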

4. Know the YAML Inside Out

A significant portion of the exam revolves around YAML configurations — catalog-info.yaml, template definitions, app-config.yaml. Make sure you can:

  • Write a catalog-info.yaml from memory for different entity kinds (Component, API, System, Domain, Resource, Group, User)
  • Understand template parameter schemas and how steps/actions work
  • Know the key configuration options in app-config.yaml
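
For practice, here is a minimal catalog-info.yaml for a Component. The service, team, and API names are hypothetical; the structure (apiVersion, kind, metadata, spec) is what the exam drills on:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  description: Handles payment processing
  annotations:
    # Tells TechDocs where the docs live relative to this file
    backstage.io/techdocs-ref: dir:.
  tags:
    - java
spec:
  type: service
  lifecycle: production
  owner: group:default/payments-team
  system: payments
  providesApis:
    - payments-api
```

Know which fields live under metadata versus spec, and how spec fields like `owner`, `system`, and `providesApis` create the entity relationships the catalog renders.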

5. Understand the “Why” Behind IDPs

The exam doesn’t just test Backstage mechanics — it also touches on the philosophy of Internal Developer Platforms. Understand:

  • Why organizations adopt IDPs
  • The role of a software catalog in reducing cognitive load
  • Golden paths and how templates enable self-service
  • How Backstage fits into the broader platform engineering landscape

Exam Day Tips

  1. Time is generous. 90 minutes for the number of questions is comfortable. Don’t rush — read each question twice.
  2. Watch for “most correct” answers. Some questions have multiple plausible answers, but one is more correct. Pay attention to qualifiers like “best,” “primary,” “most likely.”
  3. Flag and move on. If a question stumps you, flag it and come back. Often a later question will jog your memory.
  4. Eliminate wrong answers first. On tricky questions, narrowing down from 4 options to 2 makes your odds much better.
  5. Check your environment. Since it’s a proctored exam, make sure your workspace is clean, your webcam works, your ID is ready, and you’ve tested the proctoring software beforehand. Don’t let logistics steal your mental energy.

Study Timeline

Here’s a rough guide depending on your experience level:

| Experience Level | Suggested Prep Time |
| --- | --- |
| Heavy hands-on (3+ years) | 1–2 weeks, focused on gaps + practice test |
| Moderate experience (1–2 years) | 3–4 weeks, docs review + practice test + hands-on lab |
| Beginner / conceptual only | 6–8 weeks, full docs study + build a local Backstage instance + practice test |

Resources at a Glance

| Resource | Purpose |
| --- | --- |
| Udemy Practice Test | Closest thing to the real exam — essential |
| Official Backstage Docs | Primary source of truth |
| CNCF Exam Page | Exam logistics, curriculum, registration |
| Backstage GitHub Repo | Useful for understanding plugin architecture and real-world examples |
| A local Backstage instance | Nothing beats hands-on experimentation |

Final Thoughts

The Certified Backstage Associate exam is a well-designed certification that tests practical knowledge, not trivia. If you’ve been in the IDP space and have real experience with Backstage, you’re already most of the way there. The gap between “I use this daily” and “I can pass the exam” is mostly about being precise with terminology and knowing the corners of the platform you don’t touch every day.

The Udemy practice test was the highest-ROI resource for me. Combine that with a targeted read-through of the official docs, and you’ll be in great shape.

Good luck — and welcome to the growing community of platform engineers shaping how developers build software.

Container Security: AI-Powered Golden Base Image Auto-Patching

In today’s fast-paced cloud-native world, containerization has become the backbone of modern applications. However, maintaining the security of container images, especially the underlying “golden base images,” is a persistent challenge. Manually tracking and patching vulnerabilities is a time-consuming, error-prone process that leaves critical exposure windows open.

This post describes how we tackled this challenge head-on with an AI-powered solution that automates the detection and patching of critical and high vulnerabilities in our Amazon ECR container images. This not only drastically reduces Mean Time To Patch (MTTP) but also frees up engineering time for innovation rather than reactive security tasks.

The Challenge: Manual Vulnerability Management

Before this solution, the process for handling base image vulnerabilities involved:

  • Regular scans from tools like AWS Inspector.
  • Manual review of findings by security and operations teams.
  • Manually creating Dockerfile patches.
  • Triggering new image builds and testing cycles.

This sequential, human-dependent workflow meant that even with the best intentions, the time from vulnerability detection to deployment of a patched image could span days, sometimes even weeks, especially for non-critical but high-priority vulnerabilities. This was simply not sustainable for our rapidly growing infrastructure.

Solution: AI-Powered, Fully Automated Patching

The solution envisioned a system that could not only detect vulnerabilities but also intelligently propose and execute the patches autonomously. This solution, managed entirely through Infrastructure as Code (IaC) in a dedicated infra-terraform-image-builder repository, integrates several key AWS services to create a seamless, end-to-end automation pipeline.

Here’s how this works:

The Workflow at a Glance

Key Components of the Auto-Patching Pipeline

AWS Inspector2: The Sentinel – The first line of defense. AWS Inspector2 continuously scans AWS ECR repositories, detecting critical and high vulnerabilities in our container images. When a new finding emerges or an existing one escalates, Inspector2 alerts the system.

Amazon DynamoDB: The Central Brain & Trigger – Inspector2 findings are streamed into a dedicated DynamoDB table. This acts as the centralized source of truth for all vulnerabilities. Crucially, DynamoDB’s Streams feature directly feeds into the AWS Lambda function, acting as the primary trigger for automation whenever a new or updated critical/high severity finding is recorded.

AWS Lambda: The Orchestrator – This is the heart of the automation. A Python-based AWS Lambda function is invoked by DynamoDB Streams.

  • It parses the finding, identifying the vulnerable package and image.
  • It determines the base image that needs patching.
  • It orchestrates the entire patching process, from AI command generation to signaling the image build.
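
The triggering logic above can be sketched in Python. This is a simplified illustration, assuming a hypothetical shape for the Inspector2 finding as stored in DynamoDB — the attribute names (severity, cveId, packageName, imageUri) are placeholders, not the exact Inspector2 schema:

```python
# Sketch of the Lambda's first step: filter DynamoDB Stream records down to
# new or updated CRITICAL/HIGH findings and extract what the patcher needs.
# Field names are illustrative placeholders, not the real Inspector2 schema.

ACTIONABLE_SEVERITIES = {"CRITICAL", "HIGH"}

def extract_actionable_findings(event: dict) -> list[dict]:
    findings = []
    for record in event.get("Records", []):
        # Only react to new or updated findings, not deletions
        if record.get("eventName") not in ("INSERT", "MODIFY"):
            continue
        image = record.get("dynamodb", {}).get("NewImage", {})
        severity = image.get("severity", {}).get("S", "")
        if severity not in ACTIONABLE_SEVERITIES:
            continue
        findings.append({
            "cve_id": image.get("cveId", {}).get("S", ""),
            "package": image.get("packageName", {}).get("S", ""),
            "image_uri": image.get("imageUri", {}).get("S", ""),
            "severity": severity,
        })
    return findings
```

Each extracted finding would then drive the downstream steps: prompting Bedrock for a patch command, writing the script to S3, and updating the SSM parameter.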

Amazon ECR: The Image Repository – The central repository for all container images. Lambda interacts with ECR to fetch image metadata (tags, manifest) necessary for the patching process.

AWS Bedrock (Generative AI): The Intelligent Patch Creator – This is where the magic happens! The Lambda function sends the vulnerability details (CVE ID, package name, affected version, base OS) to an AWS Bedrock model. Bedrock, leveraging its generative AI capabilities, intelligently analyzes this information and generates the precise shell commands (e.g., apt-get update && apt-get install -y <package-name>=<fixed-version>) required to patch the vulnerability within the Dockerfile context. This eliminates manual script creation and dramatically speeds up the patching process.

Amazon S3: The Patch Script Store – The patch commands generated by Bedrock are stored as temporary patch scripts in an S3 bucket. This provides an auditable trail and a robust, accessible location for the next step.

AWS Systems Manager (SSM) Parameter Store: The Signal Tower – To gracefully signal the image build process, the Lambda function updates a specific SSM parameter for the relevant base image. This parameter signals to the AWS Image Builder pipelines that a new patch script has been generated and a rebuild is required.

AWS Image Builder: The Automated Forge – AWS Image Builder pipelines are configured to monitor specific SSM parameters. Upon detecting an update to the relevant parameter, the pipeline springs into action: it retrieves the base image, injects the generated patch script from S3 into the Dockerfile/build process, and builds a new, patched container image. The new image is then pushed back to ECR with updated tags.

Final Thoughts

This AI-powered golden base image auto-patching solution marks a significant leap forward in container security posture. Embracing generative AI with AWS Bedrock and integrating it with the existing AWS ecosystem not only drastically reduced the exposure window for critical and high vulnerabilities but also empowered teams by removing a significant operational burden. This approach demonstrates the power of combining modern cloud services with cutting-edge AI to build resilient, secure, and future-proof infrastructure.
