AWS Solutions Architect Certification Path – My Take!

Passing AWS certifications can be a challenging yet rewarding journey, especially for those of us looking to strengthen our cloud knowledge and prove our expertise. Having recently earned the following certifications:

  1. AWS Cloud Practitioner
  2. Solutions Architect Associate, and
  3. Solutions Architect Professional

I am glad to share my preparation strategies, study materials, and some practical tips to help you succeed.

I have around five years of hands-on experience with AWS, so my approach to each exam varied in depth and focus, especially as I progressed through the levels. Below is a breakdown of my preparation, resources used, and insights gained. I hope this post can serve as a helpful resource for those on a similar path.

Exam Preparation Strategy

Each AWS certification exam varies in difficulty, scope, and type of knowledge required. My approach evolved as I moved from Cloud Practitioner to the Professional level, focusing on more complex and architectural concepts as I progressed.

Cloud Practitioner

Objective – This entry-level certification covers foundational AWS knowledge. It’s ideal for anyone who wants to understand the basics of cloud computing and AWS services.

My Approach – I brushed up on cloud fundamentals with Stephane’s Udemy course, focusing only on topics I could not answer correctly in the practice tests. I completed a few practice exams from Udemy and AWS Skill Builder. Since this exam was relatively straightforward, I didn’t need an extensive study plan.

Time Investment – Approximately 3 days.

Solutions Architect Associate

Objective – The associate-level exam dives deeper into architectural concepts, covering core AWS services, solutions, and best practices for solution design.

My Approach – Again, I took multiple practice tests on Udemy to identify weak areas and brushed up on specific topics through Stephane’s course. Practice tests were essential for recognizing question patterns and homing in on areas I needed to revisit. AWS Skill Builder’s tests also provided valuable insights.

Time Investment – Roughly 2 weeks.

Tips

Focus on Keywords - Look out for keywords in questions that signal what AWS is prioritizing (e.g., scalability, cost optimization).

Eliminate Wrong Answers - Often, the wrong answers are easier to spot when you understand the core AWS principles.

AWS Product Preference - AWS often highlights their own solutions (like Aurora over MySQL) in the exams, so keep an eye out for those.

Time Management - The exam includes 15 unscored questions, often tricky and worded differently, which you don’t need to spend too much time on.

Solutions Architect Professional

Objective – This exam requires a comprehensive understanding of AWS services, advanced architectural concepts, and the ability to design complex solutions that align with AWS best practices.

My Approach – I relied heavily on practice exams from Udemy and AWS Skill Builder, as well as Stephane’s in-depth course. The professional-level exam requires a strong grasp of architecture, so I used practice exams to pinpoint areas for improvement.

Time Investment – About 3 weeks, including multiple practice tests and thorough review.

Tips

Conceptual Understanding - Passing the Professional exam requires much more than rote memorization. You need to understand AWS services deeply and know how to integrate them effectively.

Mindset - Focus on understanding the material rather than just passing. This exam is difficult to pass using dumps alone – a strong conceptual understanding is necessary.

Long Questions - The questions are mostly long, stating various requirements or conditions. Think of it this way: the longer the question, the more clues/hints you get ;). Also, read the actual question first (generally a one-liner at the end of the conditions/requirements). Doing so, you can relate the requirements/conditions better and get a clearer picture when you look at the answer options. This really worked for me!

Time Management - The exam includes 10 unscored questions, often tricky and worded differently (covering new services), which you don’t need to spend too much time on. Make use of the 'flag' feature and do not spend too much time on a single question. Select the option you think is most likely correct and move on. If you have time at the end, revisit the flagged questions.

Courses Referred

Throughout my preparation, I used a few main resources that I found to be both comprehensive and reliable. Here are the ones that worked well for me:

Stephane’s Courses on Udemy – These were invaluable across all three exams. Stephane’s teaching style is clear and thorough, and his courses cover the exact knowledge needed to understand AWS concepts and pass the exams. Here are the links:

  1. Cloud Practitioner
  2. SAA
  3. SAP

Practice Tests – I used the below practice tests:

  1. Cloud Practitioner – SkillBuilder
  2. SAA – Udemy Stephane, SkillBuilder
  3. SAP – Udemy Tutorialdojo, Udemy Stephane, SkillBuilder

AWS Skill Builder – AWS’s official Skill Builder platform provides practice exams, allowing me to identify knowledge gaps and get familiar with AWS’s question format. I used it (especially for the practice tests and knowledge badge learning tests) since it was included in my employer’s learning plan and I had free access. The practice exams here are good and emulate the real exam, even down to the scoring format.

AWS Documentation – The AWS documentation is also very good, and I often referred to it for specific topics.

Additional Tips

Prerequisites

Here’s a quick breakdown of the recommended experience levels for each exam based on my experience:

  1. Cloud Practitioner – You can pass this exam with minimal AWS experience by reviewing course material and practice exams.
  2. Solutions Architect Associate – I’d recommend spending time understanding the basic concepts, practicing, and getting AWS experience or equivalent training. Hands-on experience with core services (like EC2, S3, VPC, and IAM) is beneficial.
  3. Solutions Architect Professional – This exam is challenging and requires an in-depth understanding of AWS architecture, integrations, and troubleshooting. Having hands-on experience designing and deploying solutions on AWS will make it significantly easier.
Registering for the Exam

  1. Check for Discounts and Free Retakes – AWS periodically offers free retakes or discounts. You’ll also receive a 50% discount voucher for your next exam upon passing, which is helpful if you plan to pursue multiple certifications.
  2. Request Exam Accommodations – Request additional time (30 minutes) if you are not a native English speaker. This is particularly useful for the Professional exam, where you might face a time crunch.

Final Thoughts

Passing AWS certifications is a commitment, but with a structured approach and consistent practice, it’s achievable. For me, the journey from Cloud Practitioner to Solutions Architect Professional provided me with a deeper understanding of AWS services and their application in real-world scenarios.

Each certification level has a different focus, so tailor your preparation strategy accordingly. Use courses, practice exams, and AWS’s own resources like Skill Builder and the official AWS documentation.

For each exam, focus on truly understanding the concepts rather than just aiming to pass. This approach not only prepares you for the exam but also strengthens your skills for real-world AWS projects. While it might be tempting to rely on dumps, I strongly recommend focusing on concept mastery, especially for the Associate and Professional exams. Practitioner might be manageable with rote learning, but Associate and Professional levels demand deep understanding.

Good luck, and happy studying! I hope these insights can help you achieve your AWS certification goals.

Manipulating JSON with jq

Recently I was working on a project using the AWS CLI and came across some cool jq techniques for manipulating JSON. I will describe my use-case, but similar techniques can be applied whenever JSON is involved.

jq is a lightweight JSON processor written in C. You can find more information about the tool and how to install it in its official documentation. It’s quite powerful and capable of quite a lot – parsing, manipulating, and processing JSON files. Now let’s see how easy it was to get my work done using jq.

The Use Case

We use AWS Image Builder to build EC2 AMIs, and the requirement was to explicitly update the ami_name field of an existing Image Builder Distribution Configuration whenever a pipeline is triggered. There are some options in the distribution configuration to specify what we want to name the output AMI, but we wanted it to be very custom (including the base AMI version) and dynamic. I will not be talking about how Image Builder works as it’s a separate topic in itself, but you can read about it here.

The Problem

So to achieve this (using AWS CLI), we just needed to run 2 AWS CLI commands:

  1. Get the existing distribution configuration:
aws imagebuilder get-distribution-configuration --output json --distribution-configuration-arn $distribution_config_arn

The above returns a JSON response describing the existing distribution configuration, like below:

{
  "requestId": "42b6bcf5-9505-4c42-ad38-7efd8177f2ac",
  "distributionConfiguration": {
    "arn": "arn:aws:imagebuilder:us-east-1:123456789012:distribution-configuration/amazon-eks-node-latest-pipeline-distribution-config",
    "name": "amazon-eks-node-latest-pipeline-distribution-config",
    "description": "amazon-eks-node-latest image builder pipeline",
    "distributions": [
      {
        "region": "us-east-1",
        "amiDistributionConfiguration": {
          "name": "amazon-eks-node-latest-golden-ami-{{ imagebuilder:buildDate }}",
          "description": "amazon-eks-node-latest image builder pipeline",
          "amiTags": {
            "family": "amazon-eks-node-latest",
            "Name": "amazon-eks-node-latest-golden-ami-{{ imagebuilder:buildDate }}"
          },
          "launchPermission": {}
        }
      }
    ],
    "dateCreated": "2024-03-06T14:59:37.286Z",
    "tags": {
      "owner": "platform",
      "project": "image-builder",
      "env": "prod",
      "family": "amazon-eks-node-latest",
      "managedby": "terraform"
    }
  }
}

2. Update only the amiDistributionConfiguration.name field of this distribution configuration.

Now, it is not possible to update just one field of the distribution configuration. Notice that distributions is a list in the response; to update it, we need to take the full response, change the fields we want, and then pass that JSON as input to the update-distribution-configuration CLI command.

But wait: the input JSON syntax that update-distribution-configuration expects (shown below) does not exactly match the response returned by the get-distribution-configuration command.

{
  "distributionConfigurationArn": "arn:aws:imagebuilder:us-east-1:123456789012:distribution-configuration/amazon-eks-node-latest-pipeline-distribution-config",
  "description": "amazon-eks-node-latest image builder pipeline",
  "distributions": [
    {
      "region": "us-east-1",
      "amiDistributionConfiguration": {
        "name": "Name {{imagebuilder:buildDate}}",
        "description": "An example image name with parameter references"
      }
    },
    {
      "region": "eu-west-2",
      "amiDistributionConfiguration": {
        "name": "My {{imagebuilder:buildVersion}} image {{imagebuilder:buildDate}}"
      }
    }
  ]
}

This problem can arise in a variety of other scenarios (AWS or non-AWS). Enter jq to the rescue! With jq, it’s just a one-line command to process the JSON and transform it the way we want.

The Solution

The solution is to transform the response from get-distribution-configuration into the required syntax using jq, and then pass it to the update-distribution-configuration command. This can be done with the below commands (here $UPDATED_NAME is a shell variable holding the new AMI name):

# 1. Get the existing distribution configuration
distribution_config=$(aws imagebuilder get-distribution-configuration --output json --distribution-configuration-arn "$distribution_config_arn")

# 2. Transform the JSON response to match the target command's input syntax
updated_distribution_config=$(echo "$distribution_config" | jq --arg name "$UPDATED_NAME" '{ distributionConfigurationArn: .distributionConfiguration.arn, description: .distributionConfiguration.description, distributions: [.distributionConfiguration.distributions[] | .amiDistributionConfiguration.name = $name]}')

# 3. Update the distribution configuration
aws imagebuilder update-distribution-configuration --cli-input-json "$updated_distribution_config"

The transformation happens in the second command. I have split it across lines below to make it clearer. Let’s see how it works:

echo "$distribution_config" |
jq --arg name "$UPDATED_NAME" '{
  distributionConfigurationArn: .distributionConfiguration.arn,
  description: .distributionConfiguration.description,
  distributions: [.distributionConfiguration.distributions[] | .amiDistributionConfiguration.name = $name]
}'

The above filter tells jq to create a JSON object containing:

  • A distributionConfigurationArn attribute containing the value of .distributionConfiguration.arn
  • A description attribute containing the value of .distributionConfiguration.description
  • A distributions attribute containing a list of distributions from .distributionConfiguration.distributions[]
  • It also updates every .amiDistributionConfiguration.name field in that list to our desired value, for all occurrences.

As you can see, a new JSON structure is created while the existing JSON is parsed for the required fields and the name field is replaced as required, all in a single pass. This produces a JSON like the below, which is what the update-distribution-configuration command expects:

{
  "distributionConfigurationArn": "arn:aws:imagebuilder:us-east-1:123456789012:distribution-configuration/amazon-eks-node-latest-pipeline-distribution-config",
  "description": "amazon-eks-node-latest image builder pipeline",
  "distributions": [
    {
      "region": "us-east-1",
      "amiDistributionConfiguration": {
        "name": "UPDATED_NAME",
        "description": "amazon-eks-node-latest image builder pipeline",
        "amiTags": {
          "family": "amazon-eks-node-latest",
          "Name": "amazon-eks-node-latest-golden-ami-{{ imagebuilder:buildDate }}"
        },
        "launchPermission": {}
      }
    }
  ]
}
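If jq is not available, the same reshaping can be sketched in Python. This is an illustrative sketch only, using a trimmed-down sample response; the field names come from the get-distribution-configuration output shown earlier.

```python
import json

def build_update_input(response: dict, new_name: str) -> dict:
    """Reshape a get-distribution-configuration response into the
    input shape expected by update-distribution-configuration."""
    config = response["distributionConfiguration"]
    distributions = []
    for dist in config["distributions"]:
        dist = json.loads(json.dumps(dist))  # deep copy, leave the original untouched
        dist["amiDistributionConfiguration"]["name"] = new_name
        distributions.append(dist)
    return {
        "distributionConfigurationArn": config["arn"],
        "description": config["description"],
        "distributions": distributions,
    }

# Trimmed-down sample response for illustration
response = {
    "distributionConfiguration": {
        "arn": "arn:aws:imagebuilder:us-east-1:123456789012:distribution-configuration/example",
        "description": "example pipeline",
        "distributions": [
            {"region": "us-east-1",
             "amiDistributionConfiguration": {"name": "old-name"}}
        ],
    }
}
print(json.dumps(build_update_input(response, "UPDATED_NAME"), indent=2))
```

The output can then be passed to the update command just like the jq version.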

Though this is just a simple example, jq as you see is quite powerful. 🙂

Tackling Terraform – Part 1

This post is specifically to jot down some Terraform scripting blocks that are useful for specific use cases and might be a little tricky if you are starting out with Terraform. It also acts as a reference for me to come back to 😉

So Terraform is pretty awesome at managing infra as code, and the ability to create resources conditionally is something that can be easily achieved (using count or for_each). I will discuss a use-case and how to use Terraform to create infra effectively in such cases.

  • Let’s consider a scenario where we need to create resources based on some nested condition. For example:
    1. Add routes for multiple subnets in multiple route tables to configure VPC peering or Transit Gateway in AWS.
    2. Add EFS mount targets for multiple subnets for an EFS filesystem in AWS, and so on.

In the above cases, the common thing is that we need to create resources based on nested conditions. Taking the first case, let’s say we have 3 subnets and 3 route table IDs defined as below:

locals { 
  my_subnets = ["10.1.16.0/21", "10.1.24.0/21", "10.1.32.0/21"]
  rt_ids = ["rtb-01", "rtb-02", "rtb-03"]
}

We need to add a route entry for each of the subnets in all the specified route tables. We could use multiple resource blocks for this; however, that would be difficult to maintain with a large number of route tables or subnets. Instead, we can define a nested loop and create the resources in a single block using for_each.

First, let’s create a flattened list.

flatten ensures that this value is a flat list of objects, rather than a list of lists of objects. distinct removes duplicate entries, if any.

locals {
  tgw_routes = distinct(flatten([
    for rt_id in local.rt_ids : [
      for subnet in local.my_subnets : {
        rt_ids  = rt_id
        subnets = subnet
      }
    ]
  ]))
}

tgw_routes is a list; we now project it into a map where each key is unique. We combine the rt_ids and subnets values to produce a single unique key per route.

resource "aws_route" "my_routes" {
  for_each               = { for entry in local.tgw_routes : "${entry.rt_ids}.${entry.subnets}" => entry }
  route_table_id         = each.value.rt_ids
  destination_cidr_block = each.value.subnets
  transit_gateway_id     = "tgw-01"
}

The above effectively creates the routes in every route table for all the subnets. Looking at the terraform plan, we see that resources will be created with keys of the form rt_id.subnet, i.e.

aws_route.my_routes["rtb-01.10.1.16.0/21"]
aws_route.my_routes["rtb-02.10.1.16.0/21"]
aws_route.my_routes["rtb-03.10.1.16.0/21"]

aws_route.my_routes["rtb-01.10.1.24.0/21"]
aws_route.my_routes["rtb-02.10.1.24.0/21"]
aws_route.my_routes["rtb-03.10.1.24.0/21"]

aws_route.my_routes["rtb-01.10.1.32.0/21"]
aws_route.my_routes["rtb-02.10.1.32.0/21"]
aws_route.my_routes["rtb-03.10.1.32.0/21"]

Thus we used a single resource block to provision the required resources rather than 9 separate blocks. To add or remove routes, we simply add or remove an entry in the locals rt_ids or my_subnets list.

A similar approach can be taken whenever you encounter such a use-case. The advantage of using for_each is that it handles creation and deletion of resources appropriately without affecting other existing resources, since resources are referenced not by index (as with count) but by the key we specified (rt_id.subnet).
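The key construction above is just a cartesian product keyed by a composite string. A quick Python sketch of the same idea (illustrative only, not Terraform) may make the shape of the data clearer:

```python
# Mirror of the Terraform flatten/for_each pattern: build one entry per
# (route table, subnet) pair, then key each entry by a unique composite string.
rt_ids = ["rtb-01", "rtb-02", "rtb-03"]
my_subnets = ["10.1.16.0/21", "10.1.24.0/21", "10.1.32.0/21"]

# Equivalent of the nested for expressions inside flatten([...])
tgw_routes = [
    {"rt_id": rt, "subnet": subnet}
    for rt in rt_ids
    for subnet in my_subnets
]

# Equivalent of the for_each map: "${entry.rt_ids}.${entry.subnets}" => entry
routes_by_key = {f"{r['rt_id']}.{r['subnet']}": r for r in tgw_routes}

for key in routes_by_key:
    print(key)  # e.g. rtb-01.10.1.16.0/21
```

Each of the 9 keys maps to exactly one route, which is why adding or removing a subnet only touches the affected entries.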

Serverless with AWS APIGW and Lambda

Recently I was working on automating a web-hook workflow and ended up using the Serverless framework to implement the solution with AWS API Gateway and Lambda.

It was a breeze to set up, deploy, and manage the infra with Serverless, and the benefits it brings are worth mentioning: scalable, fully managed, and no overhead of managing the infra!

The below was the use-case:

Set some environment variables in a Terraform Cloud workspace automatically (using notification web-hooks). This required an API backend to authenticate the web-hook request, process it, fetch variables from AWS Secrets Manager, and invoke the Terraform REST API to update these secrets.

Below is the architecture that was implemented:

Similar use-cases involving web-hooks are very common, and a serverless setup makes it really simple to achieve something like the above.

One challenge I faced was with authorization of the web-hook request. AWS API Gateway offers Lambda Authorizers; however, they only allow validating AuthN tokens from the request headers. In the case of Terraform (or even Git) web-hooks, authentication is achieved by sending a keyed-hash message authentication code (HMAC) signature of the request body: the request body is hashed using a key, and the resulting signature is passed as a request header by the web-hook. The consumer hashes the request body with the same key and compares the two signatures to determine the authenticity of the request. However, this requires the request body, which is not available to Lambda Authorizers. The only option left is to include the HMAC signature comparison in the main Lambda function.
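The HMAC comparison itself is only a few lines of Python. A minimal sketch below; the hash algorithm and exact header name depend on the web-hook provider (SHA-512 and the event-access lines are assumptions here, so check your provider's docs):

```python
import hashlib
import hmac

def verify_signature(body: bytes, received_signature: str, secret: bytes) -> bool:
    """Recompute the HMAC of the request body and compare it with the
    signature the web-hook sent in its header."""
    expected = hmac.new(secret, body, hashlib.sha512).hexdigest()
    # compare_digest does a constant-time comparison to avoid timing attacks
    return hmac.compare_digest(expected, received_signature)

# Inside the Lambda handler, usage would look roughly like (sketch):
# signature = event["headers"]["X-Notification-Signature"]
# if not verify_signature(event["body"].encode(), signature, secret):
#     return {"statusCode": 401, "body": "invalid signature"}
```

Using hmac.compare_digest instead of == matters: a plain string comparison short-circuits on the first mismatching byte and can leak timing information.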

Other trivial settings like API GW request validation, API throttling can be set in the serverless yaml itself. To implement the above, the serverless.yaml is as below:

service: set-terraform-secrets

frameworkVersion: '2'

package:
  patterns:
    - 'node_modules/**'

plugins:
  - serverless-python-requirements
  - serverless-api-gateway-throttling
  
custom:
  pythonRequirements:
    dockerizePip: non-linux
    slim: true
    
  apiGatewayThrottling:
    maxRequestsPerSecond: 10
    maxConcurrentRequests: 5
  
provider:
  name: aws
  runtime: python3.8
  stage: ${sls:stage}
  region: eu-west-1
  lambdaHashingVersion: '20201221'
  iamRoleStatements:
    - Effect: Allow
      Action:
        - secretsmanager:Get*
        - secretsmanager:List*
      Resource: "arn:aws:secretsmanager:*:*:secret:prod/secrets/*"

functions:
  lambda:
    handler: main.set_tfe_secrets # Set based on your Lambda function handler name
    description: Lambda to set secret variables for Terraform workspaces
    events:
      - http:
          path: /secrets
          method: post
          throttling:
            maxRequestsPerSecond: 100
            maxConcurrentRequests: 50
          request:
            parameters:
              headers:
                X-Notification-Signature: true

Note the X-Notification-Signature header validation being done, to accept only requests containing this header. This is specific to the Terraform Cloud web-hook.

The plugins install Python dependencies from the requirements.txt file provided in the same source-code location, and the throttling plugin sets the API rate-limiting settings.

The IAM role is required by the Lambda function to fetch secrets and is specific to my use case. The idea is that all IAM permissions can be specified right here.

Note – Here, the secrets are created manually and not with serverless as they are static resources.

Once this is in place, all that is required to deploy the solution is to run:

sls deploy --stage prod

AWS Transit Gateway – Troubleshooting

Of late I was troubleshooting a connection issue with AWS Transit Gateway (TGW), and I thought of penning down what I learnt and how the issue was gradually resolved.

AWS Transit Gateway is a peering solution that offers a central hub to manage connectivity between VPCs. It’s better than VPC peering when peering at scale is required: VPC peering might hit hard limits, and the configuration can become difficult to maintain. The TGW setup is quite straightforward, and to put it in simple words (without getting into technicalities), the below steps suffice to get a cross-VPC connection working:

Hub Setup (Say in AWS account A – Can be source account as well)

1. Create Transit Gateway (if using AWS console – under VPC -> Transit Gateway)

2. Create Transit Gateway Attachment (specify VPC)

3. Resource Access Manager (RAM) – Share created TGW and add principals of Account B, C etc..

4. Once attachments from other accounts are created, they will appear in TGW Attachment section

5. Create TGW Route Tables for each attachment – The routes should have the destination CIDR listed

6. Add Routes to VPC subnet Route Table, setting source CIDR and destination as TGW

In Destination Account B (Which we want to peer with Account A)

1. Go to RAM and accept the shared TGW

2. The shared TGW should be visible now in TGW section

3. Create Attachment specifying VPC and TGW

Note – Transit Gateway Route Tables/Associations/Propagations can only be seen in the hub account, not in the other accounts

4. Add Routes to VPC subnet Route Table, setting source CIDR and destination as the shared TGW

Similar configurations can be done in other accounts (B, C.. etc) which we want to peer with.

Troubleshooting

For troubleshooting connectivity, the below should be checked (in Hub, Source and Destination accounts):

1. Check TGW Transit Gateway Routes

The destination CIDR -> Attachment route should be present

2. Check VPC Subnet Routes

The destination VPC CIDR -> TGW route must be present

3. Check Subnet NACL

Connections must be allowed (both inbound and outbound) to the destination VPC CIDR

4. Check Instance SG

Connections must be allowed (outbound/inbound depending on connection initiator) to the destination VPC CIDR

It was pretty simple to troubleshoot once the above steps were understood. In my case, the issue was with the VPC subnet route table, which was missing the destination VPC CIDR -> TGW route.
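The subnet-route check (step 2 above) is easy to script. A minimal sketch below operates on route-table JSON of the shape returned by the EC2 DescribeRouteTables API (e.g. via `aws ec2 describe-route-tables`); the IDs and CIDRs are made up for illustration:

```python
def has_tgw_route(route_table: dict, dest_cidr: str) -> bool:
    """Return True if the route table (one entry of the DescribeRouteTables
    response) has a route to dest_cidr that targets a Transit Gateway."""
    for route in route_table.get("Routes", []):
        if (route.get("DestinationCidrBlock") == dest_cidr
                and route.get("TransitGatewayId")):
            return True
    return False

# Example route-table snippet (shape follows the DescribeRouteTables output)
route_table = {
    "RouteTableId": "rtb-01",
    "Routes": [
        {"DestinationCidrBlock": "10.1.0.0/16", "GatewayId": "local"},
        {"DestinationCidrBlock": "10.2.0.0/16", "TransitGatewayId": "tgw-01"},
    ],
}
print(has_tgw_route(route_table, "10.2.0.0/16"))  # True
```

Running this against every subnet route table in the source and destination accounts quickly surfaces the kind of missing route that caused my issue.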

How did your troubleshooting go 😉

Hope this is of some help to those who are stuck with TGW connectivity issues!
