Certified Backstage Associate – My Take!

A straightforward breakdown of what to expect, how to prepare, and what actually helped me pass.


Why This Certification?

The Certified Backstage Associate exam, offered by the CNCF, validates your understanding of Backstage — the open-source framework that has become the de facto standard for building Internal Developer Platforms (IDPs). If you’re working in platform engineering or building developer portals, this certification puts a formal stamp on skills that are increasingly in demand.

I recently passed the exam, and I want to share exactly how I prepared — no fluff, just what worked.


My Background Going In

I had roughly 4 years of hands-on experience working with Backstage and the surrounding ecosystem of tools — software catalogs, TechDocs, scaffolding via templates, plugins, and integrating Backstage into real-world platform engineering workflows.

That experience was a massive advantage. If you’ve been actively building or maintaining a Backstage instance, you already have a strong foundation. But experience alone isn’t enough — the exam tests specific concepts, terminology, and details that you might gloss over in day-to-day work.


Exam Overview

Before diving into preparation, here’s what you’re dealing with:

| Detail | Info |
| --- | --- |
| Format | Multiple choice |
| Duration | 90 minutes |
| Passing score | 75% |
| Delivery | Online, proctored |
| Cost | $250 (includes one free retake; check the exam page for discounts) |
| Validity | 2 years |

Domain Breakdown

The exam covers the following domains:

  1. Backstage Architecture & Terminology — Core concepts, the app structure, frontend/backend separation
  2. Software Catalog — Entity kinds, catalog-info.yaml, entity relationships, processors, providers
  3. Software Templates (Scaffolder) — Template syntax, actions, custom actions, parameters
  4. TechDocs — The docs-like-code approach, MkDocs integration, generation and publishing strategies
  5. Plugins — Plugin architecture, frontend and backend plugins, extension points
  6. Security & Authentication — Auth providers, identity resolution, permissions framework
  7. Deployment & Configuration — app-config.yaml, database setup, deployment strategies

My Preparation Strategy

1. Lean Into Your Hands-On Experience

If you’ve been working with Backstage, don’t underestimate what you already know. Much of the exam felt like recalling things I’d already debugged, configured, or built.

That said, there were areas where my daily work didn’t go deep enough. I rarely thought about the exact lifecycle of entity processing or the specifics of the permissions framework beyond what I needed. The exam does go there.

Action item: Identify the domains above where your hands-on experience is thin. Focus your study time there.

2. The Udemy Practice Test — My Secret Weapon

The single most impactful resource for my preparation was this Udemy practice test:

Certified Backstage Associate – Practice Exam

Here’s why it was so effective:

  • It mirrors the real exam’s style. The phrasing, the depth of questions, and the way options are structured felt very close to the actual test.
  • It exposes your blind spots. I was confident going in, but the practice test humbled me in areas like the permissions framework and some catalog internals I hadn’t thought about deeply.
  • The explanations are useful. Don’t just check if you got the answer right — read the explanation for every question, even the ones you nailed. Sometimes your reasoning was right for the wrong reasons.

How I used it:

  1. Took the practice test cold (no prep) to get a baseline
  2. Noted every topic where I got questions wrong or guessed
  3. Studied those specific areas using the official docs
  4. Retook the practice test to confirm I’d closed the gaps
  5. Repeated until I was consistently scoring above 85%

3. Official Backstage Documentation

The official Backstage docs are the primary source of truth for this exam. Key sections to study thoroughly:

  • The Software Catalog — Entity descriptor format, well-known entity kinds, relations, substitutions
  • Software Templates — Template YAML structure, built-in and custom actions, parameter schemas
  • TechDocs — Architecture, recommended vs basic setup, MkDocs configuration
  • Plugins — How the plugin system works, frontend vs backend plugins, the new backend system
  • Auth & Permissions — Sign-in resolvers, the permission policy, resource rules
  • Architecture Decision Records (ADRs) — Skim through the key ADRs to understand why certain design decisions were made

4. Know the YAML Inside Out

A significant portion of the exam revolves around YAML configurations — catalog-info.yaml, template definitions, app-config.yaml. Make sure you can:

  • Write a catalog-info.yaml from memory for different entity kinds (Component, API, System, Domain, Resource, Group, User)
  • Understand template parameter schemas and how steps/actions work
  • Know the key configuration options in app-config.yaml
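
As a quick self-check for the catalog domain, here is the sort of minimal catalog-info.yaml you should be able to write from memory. The names (payments-service, team-payments, payments-api) are illustrative, not from the exam:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  description: Handles payment processing
  annotations:
    backstage.io/techdocs-ref: dir:.
spec:
  type: service
  lifecycle: production
  owner: team-payments
  system: payments
  providesApis:
    - payments-api
```

If you can also explain which of these fields are required, which drive entity relations (owner, system, providesApis), and how the TechDocs annotation is consumed, you are in good shape for several domains at once.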

5. Understand the “Why” Behind IDPs

The exam doesn’t just test Backstage mechanics — it also touches on the philosophy of Internal Developer Platforms. Understand:

  • Why organizations adopt IDPs
  • The role of a software catalog in reducing cognitive load
  • Golden paths and how templates enable self-service
  • How Backstage fits into the broader platform engineering landscape

Exam Day Tips

  1. Time is generous. 90 minutes for the number of questions is comfortable. Don’t rush — read each question twice.
  2. Watch for “most correct” answers. Some questions have multiple plausible answers, but one is more correct. Pay attention to qualifiers like “best,” “primary,” “most likely.”
  3. Flag and move on. If a question stumps you, flag it and come back. Often a later question will jog your memory.
  4. Eliminate wrong answers first. On tricky questions, narrowing down from 4 options to 2 makes your odds much better.
  5. Check your environment. Since it’s a proctored exam, make sure your workspace is clean, your webcam works, your ID is ready, and you’ve tested the proctoring software beforehand. Don’t let logistics steal your mental energy.

Study Timeline

Here’s a rough guide depending on your experience level:

| Experience Level | Suggested Prep Time |
| --- | --- |
| Heavy hands-on (3+ years) | 1–2 weeks, focused on gaps + practice test |
| Moderate experience (1–2 years) | 3–4 weeks, docs review + practice test + hands-on lab |
| Beginner / conceptual only | 6–8 weeks, full docs study + build a local Backstage instance + practice test |

Resources at a Glance

| Resource | Purpose |
| --- | --- |
| Udemy Practice Test | Closest thing to the real exam — essential |
| Official Backstage Docs | Primary source of truth |
| CNCF Exam Page | Exam logistics, curriculum, registration |
| Backstage GitHub Repo | Useful for understanding plugin architecture and real-world examples |
| A local Backstage instance | Nothing beats hands-on experimentation |

Final Thoughts

The Certified Backstage Associate exam is a well-designed certification that tests practical knowledge, not trivia. If you’ve been in the IDP space and have real experience with Backstage, you’re already most of the way there. The gap between “I use this daily” and “I can pass the exam” is mostly about being precise with terminology and knowing the corners of the platform you don’t touch every day.

The Udemy practice test was the highest-ROI resource for me. Combine that with a targeted read-through of the official docs, and you’ll be in great shape.

Good luck — and welcome to the growing community of platform engineers shaping how developers build software.

Container Security: AI-Powered Golden Base Image Auto-Patching

In today’s fast-paced cloud-native world, containerization has become the backbone of modern applications. However, maintaining the security of container images, especially the underlying “golden base images,” is a persistent challenge. Manually tracking and patching vulnerabilities is a time-consuming, error-prone process that leaves critical exposure windows open.

This post is about how this challenge was tackled head-on with an innovative, AI-powered solution that automates the detection and patching of critical and high vulnerabilities in our Amazon ECR container images. This not only drastically reduces Mean Time To Patch (MTTP) but also frees up time to focus on innovation rather than reactive security tasks.

The Challenge: Manual Vulnerability Management

Before this solution, the process for handling base image vulnerabilities involved:

  • Regular scans from tools like AWS Inspector.
  • Manual review of findings by security and operations teams.
  • Manually creating Dockerfile patches.
  • Triggering new image builds and testing cycles.

This sequential, human-dependent workflow meant that even with the best intentions, the time from vulnerability detection to deployment of a patched image could span days, sometimes even weeks, especially for non-critical but high-priority vulnerabilities. This was simply not sustainable for our rapidly growing infrastructure.

Solution: AI-Powered, Fully Automated Patching

The solution envisioned a system that could not only detect vulnerabilities but also intelligently propose and execute the patches autonomously. This solution, managed entirely through Infrastructure as Code (IaC) in a dedicated infra-terraform-image-builder repository, integrates several key AWS services to create a seamless, end-to-end automation pipeline.

Here’s how this works:

The Workflow at a Glance

Key Components of Auto-Patching Pipeline

AWS Inspector2: The Sentinel – The first line of defense. AWS Inspector2 continuously scans AWS ECR repositories, detecting critical and high vulnerabilities in our container images. When a new finding emerges or an existing one escalates, Inspector2 alerts the system.

Amazon DynamoDB: The Central Brain & Trigger – Inspector2 findings are streamed into a dedicated DynamoDB table. This acts as the centralized source of truth for all vulnerabilities. Crucially, DynamoDB’s Streams feature directly feeds into the AWS Lambda function, acting as the primary trigger for automation whenever a new or updated critical/high severity finding is recorded.

AWS Lambda: The Orchestrator – This is the heart of the automation. A Python-based AWS Lambda function is invoked by DynamoDB Streams.

  • It parses the finding, identifying the vulnerable package and image.
  • It determines the base image that needs patching.
  • It orchestrates the entire patching process, from AI command generation to signaling the image build.

Amazon ECR: The Image Repository – The central repository for all container images. Lambda interacts with ECR to fetch image metadata (tags, manifest) necessary for the patching process.

AWS Bedrock (Generative AI): The Intelligent Patch Creator – This is where the magic happens! The Lambda function sends the vulnerability details (CVE ID, package name, affected version, base OS) to an AWS Bedrock model. Bedrock, leveraging its generative AI capabilities, intelligently analyzes this information and generates the precise shell commands (e.g., apt-get update && apt-get install -y <package-name>=<fixed-version>) required to patch the vulnerability within the Dockerfile context. This eliminates manual script creation and dramatically speeds up the patching process.
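In code, that Bedrock step might look like the sketch below. The prompt wording, the model ID, and the response parsing (shown here in the Anthropic-messages format) are assumptions for illustration, not the exact implementation:

```python
import json


def build_patch_prompt(finding: dict, base_os: str) -> str:
    """Assemble the instruction sent to the model (illustrative wording)."""
    return (
        f"You are patching a container base image running {base_os}. "
        f"Vulnerability {finding['cve_id']} affects package "
        f"{finding['package']} version {finding['installed_version']}. "
        f"Output only the shell commands that upgrade it to "
        f"version {finding['fixed_version']}."
    )


def generate_patch_commands(
    bedrock_client,
    finding: dict,
    base_os: str,
    model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder choice
) -> str:
    """Invoke the model via bedrock-runtime and return its text output."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": build_patch_prompt(finding, base_os)}
        ],
    })
    response = bedrock_client.invoke_model(modelId=model_id, body=body)
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

Passing the client in as a parameter keeps the function testable with a stub, and makes it easy to swap models without touching the orchestration logic.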

Amazon S3: The Patch Script Store – The dynamically generated patch commands from Bedrock are stored as temporary patch scripts in an S3 bucket. This ensures an auditable trail and provides a robust, accessible location for the next step.

AWS Systems Manager (SSM) Parameter Store: The Signal Tower – To gracefully signal the image build process, SSM Parameter Store is used. The Lambda function updates a specific SSM parameter for the relevant base image; this parameter acts as a signal to the AWS Image Builder pipelines that a new patch script has been generated and a rebuild is required.

AWS Image Builder: The Automated Forge – AWS Image Builder pipelines are configured to monitor specific SSM parameters. Upon detecting an update to the relevant parameter, it springs into action. It retrieves the base image, injects the generated patch script from S3 into the Dockerfile/build process, and then builds a new, patched container image. This newly built image is then pushed back to ECR with updated tags.
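The SSM "signal" from the previous steps can be as small as a single put_parameter call. The parameter naming scheme below is an assumption for illustration; the real pipeline may use a different convention:

```python
import time


def signal_rebuild(ssm_client, base_image: str, patch_script_key: str) -> str:
    """Write the rebuild signal that the Image Builder pipeline watches.

    The parameter path is a hypothetical naming scheme. The value embeds
    the S3 key of the patch script plus a timestamp, so every new patch
    produces a distinct value and the pipeline reliably sees an update.
    """
    name = f"/image-builder/{base_image}/patch-signal"
    value = f"{patch_script_key}@{int(time.time())}"
    ssm_client.put_parameter(Name=name, Value=value, Type="String", Overwrite=True)
    return name
```

Because the parameter value changes on every invocation, downstream automation can trigger on any update without having to diff script contents.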

Final Thoughts

This AI-powered golden base image auto-patching solution marks a significant leap forward in container security posture. Embracing generative AI with AWS Bedrock and integrating it with the existing AWS ecosystem not only drastically reduced the exposure window for critical and high vulnerabilities but also empowered teams by removing a significant operational burden. This approach demonstrates the power of combining modern cloud services with cutting-edge AI to build resilient, secure, and future-proof infrastructure.

Backstage Custom Field Extension: Dynamic Field Updates

Backstage makes building developer portals a breeze — especially with Software Templates for scaffolding services. But what if you want to build dynamic forms?

For example:

“Pick an AWS account, and auto-select the right IAM role or environment based on the account.”

Let’s walk through how I built an AwsAccountPicker — a custom field extension that listens to changes in another field and updates itself accordingly.

I am using FieldProps (from @rjsf/utils) instead of the Backstage-specific FieldExtensionComponentProps.

Why FieldProps? While Backstage templates usually encourage using FieldExtensionComponentProps, using FieldProps gives you full access to the underlying JSON Schema Form engine — giving you more control, especially for dynamic behaviours like reacting to other fields.

Requirement

A custom field extension (AwsAccountPicker) that:

  • Fetches a list of AWS accounts.
  • Observes another form field (like serviceName) via formContext.formData.
  • Dynamically selects the appropriate account based on a pattern match.

Setting Up the Field Extension

In Backstage frontend app/src/components/scaffolder/customScaffolderExtensions.tsx:

import { FieldExtensionOptions } from '@backstage/plugin-scaffolder-react';
import { AwsAccountPicker } from './AwsAccountPicker';

export const awsAccountPickerExtension: FieldExtensionOptions = {
  name: 'AwsAccountPicker',
  component: AwsAccountPicker,
};

Inside app/src/components/scaffolder/AwsAccountPicker.tsx:

import React, { useEffect, useMemo } from 'react';
import { FieldProps } from '@rjsf/utils';
import { TextField, MenuItem } from '@material-ui/core';

export const AwsAccountPicker = ({
  formData,
  onChange,
  uiSchema,
  registry,
  schema,
}: FieldProps) => {
  const allFormData = registry.formContext?.formData ?? {};
  const uiOptions = uiSchema?.['ui:options'] || {};

  // Static list for illustration — use a hook to fetch this from an API.
  const accounts = useMemo(
    () => [
      { id: '123456789012', name: 'dev-account' },
      { id: '210987654321', name: 'prod-account' },
    ],
    [],
  );

  const valueFrom = uiOptions.autofillFrom as string;
  const outputType = (uiOptions.output as 'id' | 'name') || 'id';
  const serviceName = allFormData[valueFrom];

  useEffect(() => {
    if (!serviceName) return;
    const expectedAccount = `${serviceName}-account`;
    const match = accounts.find(acc =>
      expectedAccount.toLowerCase().includes(acc.name.toLowerCase()),
    );
    // Only push a change when the matched value differs, to avoid loops.
    if (match && formData !== match[outputType]) {
      onChange(match[outputType]);
    }
  }, [serviceName, formData, onChange, outputType, accounts]);

  return (
    <TextField
      select
      label={schema.title}
      value={formData ?? ''}
      onChange={e => onChange(e.target.value)}
    >
      {accounts.map(account => (
        <MenuItem key={account.id} value={account[outputType]}>
          {account.name}
        </MenuItem>
      ))}
    </TextField>
  );
};

The above is a simplified version of the component using FieldProps.

How This Works

  • registry.formContext.formData gives you access to other form fields.
  • uiSchema['ui:options'].autofillFrom defines which field to observe.
  • useEffect triggers when the observed field changes and auto-updates the current field.

This approach is clean, does not require extra state management, and works naturally with Backstage scaffolder templates.

Using It in a Template

In template.yaml:

parameters:
  - title: Service Details
    required:
      - serviceName
      - awsAccount
    properties:
      serviceName:
        type: string
        title: Service Name
      awsAccount:
        title: AWS Account
        type: string
        ui:field: AwsAccountPicker
        ui:options:
          autofillFrom: serviceName
          output: id

With this setup, if serviceName = my-service, and your AWS accounts include my-service-account, the form will auto-select the correct account.

The same pattern can be extended to:

  • Conditionally fetch data based on other fields.
  • Support custom inputs with validation.
  • Handle grouped permissions, environments, or region selectors.

Empowering Developers & Streamlining Workflows with Backstage 

In today’s fast-paced digital landscape, efficiency and consistency are paramount for any organization striving to stay ahead of the curve.  

This becomes even more crucial in a cross-located and dynamic team setup. Enter Backstage – a powerful platform designed to centralize the service catalog and standardize service and infrastructure creation.

Backstage, originally developed by Spotify, is an open platform for building developer portals. It unifies infrastructure tooling, services, and documentation to create a streamlined development environment from end to end. 

Why Backstage? 

Backstage offers a comprehensive solution to the challenges we face in managing our ever-expanding ecosystem of services, APIs, and infrastructure. By centralizing the service catalog, Backstage provides a unified view of all internal tools and resources, enabling teams to easily discover, consume, and share services across the organization. This not only promotes transparency but also facilitates collaboration and knowledge sharing among teams.

Furthermore, Backstage’s templating capabilities can revolutionize the way we create and manage services and infrastructure. With customizable templates, we can standardize project setups, ensuring consistency and reducing the time and effort required to onboard new services. This standardization improves efficiency and enhances system reliability and maintainability.

Journey towards Centralization and Standardization 

The decision to adopt Backstage is often based on a commitment to excellence and continuous improvement. As teams grow, so does the complexity of managing services and infrastructure. Thus comes the need for a centralized platform that can provide visibility and control over the entire ecosystem while also promoting best practices and standardization.

With Backstage, we can take a proactive approach to addressing these challenges. By centralizing the service catalog, we are breaking down silos and creating a culture of collaboration and innovation. Teams can now easily discover and leverage existing services, reducing duplication of effort and accelerating time-to-market for new services.

Moreover, by using templates to standardize service and infrastructure creation, we can ensure consistency and reliability across the organization. Whether it is deploying a new microservice or provisioning a cloud infrastructure, teams can rely on predefined templates to guide them through the process, eliminating guesswork and reducing the risk of errors. 

First things First 

Backstage can be overwhelming at first and involves some initial setup effort. Because it provides a plugin-based framework, supporting custom requirements later becomes easier. Focusing on the core features first keeps setup time low and provides immediate value to developers.

The features that could be rolled out initially: 

  1. Centralized service catalog with updated and relevant service metadata – The most important part here is to ensure that the service metadata is relevant and useful, with proper ownership set, along with an effortless way to register entries in the catalog. 
  2. Standardized service/infrastructure scaffolding – Standardizing and templatizing service and infrastructure creation has a cascading effect on managing services more efficiently, cost optimization, visibility, and operational excellence. The best part of templates is that they are easy to create, can be reused, and enforce standards out of the box when set up correctly. 
  3. Overview of the tech ecosystem (aka TechRadar) – An overview of the technology ecosystem is crucial to understanding how various tools and frameworks are leveraged, providing better visibility and informing decisions on streamlining and on what to prioritize. 

Conclusion 

Adoption of Backstage represents a significant milestone in the journey towards centralization and standardization. By embracing this powerful platform, we can not only improve development workflows but also lay the foundation for future growth and scalability. The possibilities that Backstage brings and the impact it can have on an organization are huge.

It is also essential to promote a collaborative model where developers feel empowered to contribute to Backstage plugin development and keep improving the offerings.

Stay tuned for more updates on Backstage and how we can set it up in a production environment.

DevOps Interview with Glovo

Recently I was interviewed by Glovo, one of Barcelona’s fastest-growing second-generation startups, for the position of Senior DevOps Engineer. It was a great experience, and I am jotting down the process, my learnings, and some tips which should be helpful to people attending interviews with similar companies/roles.

There were 6 rounds and a final feedback session in total:

  1. Introductory HR + Basic technical – 45 mins
  2. Technical round – 1.5 hrs
  3. Codility – Online take home test (a 2 hrs timed test, to be completed within a week)
  4. System architecture & scalability – 1hr
  5. Pair Programming/Scripting – 1.5 hrs
  6. Behavioural round with manager – 1 hr
  7. Final result and feedback with HR – 30 mins

Each round was well planned and structured. After every round, the next step in the process was explained and feedback was shared over email. Expectations, the interview structure, and interviewer details were shared, along with some basic tips, before each interview.

Now let’s discuss each round in detail.

  1. Introductory HR + Basic technical – 45 mins
    • This was a friendly HR round where I was asked basic stuff about myself, my interests, experience etc.
    • My knowledge of the company was also checked, and the job profile was discussed in detail.
    • Some technical questions (basic screening questions) were thrown in towards the end.
    • Time was given to answer my questions as well.
    • Overall they checked my interest, communication skills, basic technical knowledge and if I fit the role being offered.
Tips
- Have your CV tailored according to the JD.
- Ensure to read about the company and know basic stats, its core business, etc.
  2. Technical round – 1.5 hrs
    • A document with the expectations and topics to prepare was given well before the interview.
    • It was a good technical discussion with one of the engineers I would probably be working with.
    • Questions were mostly situation-based: how I would approach a problem and solve it.
    • Some direct technical questions were asked as well.
    • Basically, my technical expertise and problem-solving approach were assessed.
Tips
- Answer all approaches you can think of for a given problem.
- Give real world examples to show your expertise on the topic.
- Ensure to brush up basics before the interview.
  3. Codility – Online take home test
    • There were 2 questions, and Codility was the platform where the code was to be written and submitted, in any language of my choice (I chose Python 3).
    • Some sample questions were shared, and it’s very helpful to solve these to get used to the Codility environment.
    • It was a 2 hr timed test, so timing was key.
    • Of the 2 questions, 1 was simple (and had to be coded in shell script) and 1 was of medium-to-hard difficulty (IMO).
    • Codility also runs performance tests on the submitted solution and scores accordingly, so taking care of time and space complexity while writing the code was crucial.
Tips
- Get some competitive coding experience (join HackerRank, LeetCode, Codility, CodeChef, Codeforces, or GeeksforGeeks, whichever you find good).
- Ensure to solve some sample questions in Codility to get a hang of it.
- You are also allowed to code in your IDE and copy the code into Codility (if that helps).
  4. System architecture & scalability – 1 hr
    • Again, the basic expectations and tools to be used (e.g., draw.io to draw the HLD) were shared beforehand.
    • A common application was to be designed end to end, with some constraints/conditions pre-defined.
    • Any approach/design/tool/technology was allowed to be used; however, every selection had to be backed with proper justification (e.g., monolith vs microservices).
    • It was a highly interactive session, and I was allowed to ask as many questions as I wanted and to justify/explain my design decisions.
    • The architecture diagram (HLD) was to be drawn in draw.io with screen sharing/video turned on.
    • From a DevOps perspective: HA, scalability, DR, CDN, LB, and use of DBs (CAP theorem) had to be known and explained in detail.
    • Overall it was a very interesting session.
Tips
- Read and get a good grasp of the best infra design patterns: HA-DR, CDNs, message brokers, caching, DBs (when to use what), the CAP theorem, common bottlenecks, etc.
- Subscribe to any website/YouTube channel that provides system design courses/videos (e.g., Gaurav Sen's channel).
- Ensure to go through the system design of common apps like YouTube, Facebook, Swiggy, WhatsApp, and Uber. Seldom is an out-of-the-box app asked to be designed; it will mostly be one of the commonly used apps.
  5. Pair Programming/Scripting – 1.5 hrs
    • This was again on Codility, with screen sharing and video turned on.
    • It was pair programming: in some questions the interviewer coded some lines and asked me to code the next part, or stepped in if I was stuck.
    • There were 2 interviewers, one active and one passive.
    • Again, I felt the questions were given in order of difficulty, starting with a program to be coded in bash, with the remaining 2 in any language of my choice.
    • It was again very interactive, and all constraints/edge cases were promptly clarified when asked.
    • Overall, my coding skills, problem-solving approach, attention to detail, and good coding practices were tested.
Tips
- Ensure to ask as many questions as possible to narrow down all edge cases and constraints.
- Use comments, proper naming conventions, and test cases. (I missed writing test cases and was told so in the feedback.)
- Write out the algorithm you intend to follow before starting to code.
- Think out loud and convey your thought process.
- If you are unable to code the solution, at least try to provide a high-level algorithm.
  6. Behavioural round with manager – 1 hr
    • This round was taken by one of the engineering managers and focused entirely on my values, communication skills, and thought process.
    • Questions were basically about how I would react in a given situation, what I think good engineering practices are, how to promote collaboration, how well I can work in a diverse environment, etc.
    • Also covered: how I handle failure and criticism and my view on feedback, coupled with some questions on high-level engineering practices, value delivery, scrum processes, CI/CD techniques, etc.
    • This being a senior role, questions related to coaching junior developers, conflict management, etc. were asked as well.
    • Sufficient time was allotted to answer all the questions asked by me at the end.
Tips
- Give examples from your experience wherever possible.
- Prepare well for answering questions like: your strengths and weaknesses; your most difficult project; how you handle failure; etc.
- Speak slowly and confidently.
- Ask questions you may want, to show your interest and also convey your working style.

The final HR discussion was to give detailed feedback on all the rounds and declare the final outcome. A detailed explanation of the company benefits was also given.


Fortunately the outcome was positive in my case 🙂
