This repo is a guide to taking the AWS Certified Solutions Architect – Associate certification exam in 2021. See the main overview of the exam on the official AWS site.
- acloudguru hands on course
- These amazing practice exams with explanations
- These amazing study guides/cheat sheets
- Last practice exam taken was the acloudguru SAA-C02 practice exam
- Scored 100–1,000 on a scaled scoring model, with a minimum passing score of 720
- 15 unscored questions that do not affect your score
- Unanswered questions are scored as incorrect; there is no penalty for guessing
- Multiple-choice: Has one correct response and three incorrect responses (distractors).
- Multiple-response: Has two or more correct responses out of five or more options.
Domain 1: Design Resilient Architectures 30%
1.1 Design a multi-tier architecture solution
- Determine a solution design based on access patterns.
- Determine a scaling strategy for components used in a design.
- Select an appropriate database based on requirements.
- Select an appropriate compute and storage service based on requirements.

1.2 Design highly available and/or fault-tolerant architectures
- Determine the amount of resources needed to provide a fault-tolerant architecture across Availability Zones.
- Select a highly available configuration to mitigate single points of failure.
- Apply AWS services to improve the reliability of legacy applications when application changes are not possible.
- Select an appropriate disaster recovery strategy to meet business requirements.
- Identify key performance indicators to ensure the high availability of the solution.

1.3 Design decoupling mechanisms using AWS services
- Determine which AWS services can be leveraged to achieve loose coupling of components.
- Determine when to leverage serverless technologies to enable decoupling.
1.4 Choose appropriate resilient storage
- Define a strategy to ensure the durability of data.
- Identify how data service consistency will affect the operation of the application.
- Select data services that will meet the access requirements of the application.
- Identify storage services that can be used with hybrid or non-cloud-native applications.

Domain 2: Define Performant Architectures 28%
2.1 Identify elastic and scalable compute solutions for a workload
- Select the appropriate instance(s) based on compute, storage, and networking requirements.
- Choose the appropriate architecture and services that scale to meet performance requirements.
- Identify metrics to monitor the performance of the solution.
2.2 Select high-performing and scalable storage solutions for a workload
- Select a storage service and configuration that meets performance demands.
- Determine storage services that can scale to accommodate future needs.
2.3 Select high-performing networking solutions for a workload
- Select appropriate AWS connectivity options to meet performance demands.
- Select appropriate features to optimize connectivity to AWS public services.
- Determine an edge caching strategy to provide performance benefits.
- Select appropriate data transfer service for migration and/or ingestion.
2.4 Choose high-performing database solutions for a workload
- Select an appropriate database scaling strategy.
- Determine when database caching is required for performance improvement.
- Choose a suitable database service to meet performance needs.
Domain 3: Specify Secure Applications and Architectures 24%
3.1 Design secure access to AWS resources
- Determine when to choose between users, groups, and roles.
- Interpret the net effect of a given access policy.
- Select appropriate techniques to secure a root account.
- Determine ways to secure credentials using features of AWS IAM.
- Determine the secure method for an application to access AWS APIs.
- Select appropriate services to create traceability for access to AWS resources.
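For "interpret the net effect of a given access policy," the core rule is: an explicit Deny always overrides an Allow, and anything unmatched is implicitly denied. A minimal sketch of that evaluation logic (`evaluate` is a hypothetical helper, not an AWS API; only trailing-wildcard actions like `s3:*` are handled):

```python
def evaluate(statements, action):
    """Return 'Allow', 'Deny' (explicit), or 'ImplicitDeny' for an action."""
    decision = "ImplicitDeny"
    for stmt in statements:
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        # Match exact actions or the trailing-wildcard form, e.g. "s3:*"
        matched = any(
            a == action or (a.endswith("*") and action.startswith(a[:-1]))
            for a in actions
        )
        if not matched:
            continue
        if stmt["Effect"] == "Deny":
            return "Deny"      # explicit deny overrides any allow
        decision = "Allow"
    return decision

policy = [
    {"Effect": "Allow", "Action": "s3:*"},
    {"Effect": "Deny", "Action": "s3:DeleteObject"},
]
print(evaluate(policy, "s3:GetObject"))        # Allow
print(evaluate(policy, "s3:DeleteObject"))     # Deny
print(evaluate(policy, "ec2:StartInstances"))  # ImplicitDeny
```

Real IAM evaluation also factors in resource ARNs, conditions, and policy boundaries; this only illustrates the deny-overrides ordering the exam tests.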

3.2 Design secure application tiers
- Given traffic control requirements, determine when and how to use security groups and network ACLs. TIP: "A security group acts as a firewall; it controls both inbound and outbound traffic at the instance level, not for the whole VPC."
- Determine a network segmentation strategy using public and private subnets.
- Select the appropriate routing mechanism to securely access AWS service endpoints or internet-based resources from Amazon VPC.
- Select appropriate AWS services to protect applications from external threats.
3.3 Select appropriate data security options
- Determine the policies that need to be applied to objects based on access patterns.
- Select appropriate encryption options for data at rest and in transit.
Domain 4: Design Cost-Optimized Architectures 18%
4.1 Identify cost-effective storage solutions
- Determine the most cost-effective data storage options based on requirements.
- Apply automated processes to ensure that data over time is stored on storage tiers that minimize costs.
AWS Services and Features
Analytics:
- Amazon Athena
- Amazon Elasticsearch Service (Amazon ES)
- AWS Glue
- Amazon QuickSight
AWS Billing and Cost Management:
- AWS Budgets
- Cost Explorer
Application Integration:
- Amazon Simple Notification Service (Amazon SNS)
- Amazon Simple Queue Service (Amazon SQS) and its Cheat Sheet
Compute:
- AWS Elastic Beanstalk
- AWS Fargate
Database:
Management and Governance:
- AWS Backup
- Amazon EventBridge (Amazon CloudWatch Events)
- AWS Organizations
- AWS Trusted Advisor
Migration and Transfer:
- AWS Database Migration Service (AWS DMS)
- AWS Migration Hub
- AWS Server Migration Service (AWS SMS)
Networking and Content Delivery:
- AWS Direct Connect
- AWS Global Accelerator
- AWS Transit Gateway
Security, Identity, and Compliance:
- AWS Certificate Manager (ACM)
- AWS Directory Service
- Amazon GuardDuty
- AWS Identity and Access Management (IAM) and IAM DB AUTH
- Amazon Inspector
- AWS Key Management Service (AWS KMS)
- Amazon Macie
- AWS Secrets Manager
- AWS Shield
- AWS Single Sign-On
- AWS WAF
Storage:
- Configure CLI credentials: `aws configure`
- Copy a file from local to a bucket: `aws s3 cp <path> s3://<bucket>`
- List buckets: `aws s3 ls`
- List bucket content: `aws s3 ls s3://<bucket>`
- Create an S3 bucket: `aws s3api create-bucket --bucket <bucketname> --region us-east-1`
- Grab your AWS environment variables from the CLI: `env | grep ^AWS`
- See which policies are attached to a user: `aws iam list-attached-user-policies --user-name=$AWS_ACCOUNT_USERNAME`
- Create an IAM user: `aws iam create-user --user-name root-for-vault`
- Attach a policy: `aws iam attach-user-policy --user-name root-for-vault --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/vault-root`
- Create an access key and secret, teeing to a txt file for temporary use: `aws iam create-access-key --user-name root-for-vault | tee root-for-vault-keys.txt`
- Set the default region: `export AWS_DEFAULT_REGION=us-east-1`
- Create a default VPC: `aws ec2 create-default-vpc`
- Run an EC2 instance: `aws ec2 run-instances --image-id <amiid> --instance-type <ec2type> --count 1`
- List RDS instances: `aws rds describe-db-instances`
- Grab metadata from an instance: `curl http://169.254.169.254/latest/meta-data/` (or `wget http://169.254.169.254/latest/meta-data/`)
- Grab user data from an instance: `curl http://169.254.169.254/latest/user-data/`
- List Lambda functions (see the full Lambda CLI reference): `aws lambda list-functions --max-items 10`
- Invoke a Lambda: `aws lambda invoke --function-name my-function --payload '{ "name": "Bob" }' response.json`
- Delete an S3 bucket and all its contents with one command: `aws s3 rb s3://bucket-name --force`
- Copy a directory and its subfolders from your PC to Amazon S3: `aws s3 cp MyFolder s3://bucket-name --recursive [--region us-west-2]`
- Display a subset of all available EC2 images: `aws ec2 describe-images | grep ubuntu`
- List users in a different format: `aws iam list-users --output table`
- Get a credential report from the CLI: `aws iam generate-credential-report`, then read it with `aws iam get-credential-report --output text | base64 --decode >> credentialreport.csv`
- List the total size and object count of an S3 bucket: `aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"`
- Move an S3 bucket to a different region: `aws s3 sync s3://oldbucket s3://newbucket --source-region us-west-1 --region us-west-2`
- Sync files from local but exclude some directories and .pem files: `aws s3 sync <YOURLOCALPATH> s3://<YOURBUCKETNAME> --exclude 'scripts/*' --exclude '*.pem'`
- List users by ARN: `aws iam list-users --output json | jq -r .Users[].Arn`
- List all of your currently running instances: `aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[].[InstanceId,State,PublicIpAddress,Tags[?Key=='Name'].Value]" --region us-east-1 --output json | jq`, or as a table: `aws ec2 describe-instances --filters Name=instance-state-name,Values=running --region us-east-1 --output table`
- Start EC2 instances: `aws ec2 start-instances --instance-ids <your instance id>`
- Describe your security group rules: `aws ec2 describe-security-group-rules`
- Pass input parameters to the AWS CLI as inline JSON: `aws iam put-user-policy --user-name AWS-Cli-Test --policy-name Power-Access --policy-document '{ "Statement": [{ "Effect": "Allow", "NotAction": "iam:*", "Resource": "*" }] }'`
- Send a notification to an SNS topic when backups complete: `aws backup put-backup-vault-notifications --endpoint-url https://backup.eu-west-1.amazonaws.com --backup-vault-name examplevault --sns-topic-arn arn:aws:sns:eu-west-1:111111111111:exampletopic --backup-vault-events BACKUP_JOB_COMPLETED`
- Get backup notifications: `aws backup get-backup-vault-notifications --backup-vault-name examplevault`
- Assume into a role: `aws sts assume-role --role-arn arn:aws:iam::<accountnumber>:role/<rolename> --role-session-name readcross --profile <name>`
- Describe instance types: `aws ec2 describe-instance-types --query 'sort_by(InstanceTypes, &InstanceType)[].InstanceType' --profile <name> --region us-east-1`
- Describe your own AMIs: `aws ec2 describe-images --owners self --output json --profile <name> --region us-east-1 | jq .Images[].ImageId`
- Tutorial Dojo Study Guide and Cheat Sheets
- Notes Doc
- Great Medium Article on how to crack it
- AWS Whitepapers
- AWS Well-Architected
- AWS Training and Free Training
- Acloudguru training
- Udemy course
- Free Practice Exams (25 questions), Dump Practice Exams, and $20 Practice Exam Thru Cert Center
- Amazing 6 practice exams $29.99 with video and thorough study guides/explanations
- Flashcards
- Main Cert Page
- AWS Cert Center
- All Official AWS Cert Past Exams
- AWS Cert Prep Center
- All FAQs
- CLI S3 cheat sheet
- Boto3 SNS Doc
- EFS vs S3 Cheat Sheet
- Aurora Custom Endpoints
- Aurora Global Database
- Aurora Serverless DB
- MQ
- NAT Gateway Comparison
- Aurora plus Lambda
- NACLs are open by default, but once you add restrictive rules you also need to allow ephemeral ports. Rules are evaluated in number order and the first matching rule is applied!
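The first-match behavior can be sketched as below (illustrative only, not the AWS API; the rule numbers, ports, and ephemeral range are example values):

```python
def nacl_decision(rules, port):
    # Rules are (number, (port_lo, port_hi), action); lowest number first,
    # and the FIRST rule whose port range matches decides the outcome.
    for number, (lo, hi), action in sorted(rules):
        if lo <= port <= hi:
            return action  # first matching rule wins
    return "deny"          # implicit "*" rule denies anything unmatched

rules = [
    (100, (443, 443), "allow"),      # inbound HTTPS
    (200, (1024, 65535), "allow"),   # ephemeral ports for return traffic
    (300, (0, 65535), "deny"),       # everything else
]
print(nacl_decision(rules, 443))   # allow
print(nacl_decision(rules, 5000))  # allow (ephemeral)
print(nacl_decision(rules, 22))    # deny
```

If rule 200 were removed, responses on ephemeral ports would fall through to the deny, which is exactly the gotcha the tip warns about.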

- Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum capacity to 6. Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B.
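The even split in that answer generalizes: an Auto Scaling group spreads desired capacity as evenly as possible across its Availability Zones. A tiny illustrative helper (`spread` is hypothetical, not an AWS API):

```python
def spread(desired, azs):
    # Distribute `desired` instances across `azs` zones as evenly as
    # possible, giving the remainder to the first zones.
    base, extra = divmod(desired, azs)
    return [base + 1 if i < extra else base for i in range(azs)]

print(spread(4, 2))  # [2, 2] -> the 2 + 2 split across AZ-A and AZ-B
print(spread(5, 2))  # [3, 2]
print(spread(6, 3))  # [2, 2, 2]
```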
- You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. "Periodically" in a question usually hints at Spot Instances.
- ElastiCache and DynamoDB are the go-to stores for session management.
- ElastiCache improves the performance of your database by caching query results.
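The usual pattern behind that tip is cache-aside: check the cache first, fall back to the database on a miss, and populate the cache for next time. A sketch with local stand-ins (`cache`, `db`, and `query_db` are placeholders, not real AWS clients):

```python
cache = {}                    # stand-in for an ElastiCache cluster
db = {"user:1": "Alice"}      # stand-in for the backing database
db_hits = 0

def query_db(key):
    global db_hits
    db_hits += 1              # count expensive database round-trips
    return db[key]

def get(key):
    if key in cache:          # cache hit: skip the database entirely
        return cache[key]
    value = query_db(key)     # cache miss: query the database...
    cache[key] = value        # ...and store the result for next time
    return value

print(get("user:1"), db_hits)  # Alice 1  (miss -> database)
print(get("user:1"), db_hits)  # Alice 1  (hit  -> cache, no extra DB call)
```

A real deployment would also set a TTL and handle invalidation on writes; the exam mostly tests that repeated reads stop hitting the database.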
- IAM role plus AD Connector (when in doubt, IAM is your friend).
- IAM roles are global, so there is no need to create new ones per region.
- If your identity store is not compatible with SAML 2.0, you can build a custom identity broker application using STS.
- CloudFront private content: signed cookies keep the URLs unchanged; signed URLs work if you don't mind different URLs. If you don't want users hitting S3 links directly, use cookies and restrict access to files in the origin by creating an origin access identity (OAI) and giving it permission to read the files in the bucket.
- EMR = big data processing and analysis.
- Route 53 failover (active-active = all resources available the majority of the time). Two types of failover configurations:
- Active-Active Failover – all the records that have the same name, the same type, and the same routing policy are active unless Route 53 considers them unhealthy. Use this failover configuration when you want all of your resources to be available the majority of the time.
- Active-Passive Failover – use this failover configuration when you want a primary resource or group of resources to be available the majority of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the healthy primary resources.
- S3 static website hosting is the cheaper hosting option.
- S3 Glacier Expedited retrievals let you quickly access your data for occasional urgent requests, and Provisioned Capacity ensures that retrieval capacity for Expedited retrievals is available when you need it.
- Know the costs: One Zone-IA is cheaper than Standard-IA, and Intelligent-Tiering is more expensive than Standard-IA.
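A back-of-the-envelope check of that cost ordering. The per-GB prices below are ASSUMED placeholder figures (check current AWS pricing); the point is only the ordering One Zone-IA < Standard-IA < Standard, with Intelligent-Tiering adding a per-object monitoring fee on top:

```python
PRICE_PER_GB = {          # assumed $/GB-month, illustrative only
    "STANDARD": 0.023,
    "STANDARD_IA": 0.0125,
    "ONEZONE_IA": 0.01,
}

def monthly_cost(gib, storage_class):
    # Storage charge only; ignores request, retrieval, and transfer fees.
    return gib * PRICE_PER_GB[storage_class]

for cls in PRICE_PER_GB:
    print(cls, round(monthly_cost(500, cls), 2))
```

IA classes also carry per-GB retrieval fees and 30-day minimums, which is why they only win for infrequently accessed data.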

- An Amazon S3 Glacier (Glacier) vault can have one resource-based vault access policy and one Vault Lock policy attached to it. A Vault Lock policy is a vault access policy that you can lock. Using a Vault Lock policy can help you enforce regulatory and compliance requirements.
- Bulk retrievals are S3 Glacier's lowest-cost retrieval option, enabling you to retrieve large amounts, even petabytes, of data within 5–12 hours. In a lifecycle rule you can specify an absolute or relative time period (including 0 days) after which the specified Amazon S3 objects should be transitioned to Amazon Glacier.
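A sketch of a lifecycle rule expressing that "0 days" transition. The dict shape matches what boto3's `put_bucket_lifecycle_configuration` expects, but this just builds it locally (no AWS call is made; the rule ID and prefix are hypothetical):

```python
lifecycle = {
    "Rules": [
        {
            "ID": "archive-immediately",       # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},  # only objects under archive/
            "Transitions": [
                # Days: 0 -> transition to Glacier as soon as possible
                {"Days": 0, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

print(lifecycle["Rules"][0]["Transitions"][0])
```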
- When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
- Data at rest inside the volume
- All data moving between the volume and the instance
- All snapshots created from the volume
- All volumes created from those snapshots
- Parameter Store works with ECS but does not rotate secrets by default; Secrets Manager rotates secrets if rotation is enabled.
- Use Route 53 alias records (A and AAAA) to point to an ALB.
- Access logging is an optional feature of Elastic Load Balancing that is disabled by default.
- "Open source" container orchestration = EKS (managed Kubernetes).
- Of the AWS Storage Gateway storage solutions, only File Gateway can store and retrieve objects in Amazon S3 using the NFS and SMB protocols.
- When bandwidth is slow and you need to move 250 TB, use Snowball. As a rule of thumb, if it takes more than one week to upload your data to AWS using the spare capacity of your existing Internet connection, then you should consider using Snowball. For example, if you have a 100 Mb connection that you can solely dedicate to transferring your data and need to transfer 100 TB of data, it takes more than 100 days to complete the transfer over that connection. You can make the same transfer by using multiple Snowballs in about a week.
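The rule-of-thumb arithmetic above, made explicit. Even at an unrealistic 100% link utilization it's already ~93 days; any real-world overhead pushes it past the 100-day figure quoted:

```python
def transfer_days(terabytes, mbps):
    # Days to push a dataset over a dedicated link at a given speed.
    bits = terabytes * 1e12 * 8    # TB -> bits (decimal units)
    seconds = bits / (mbps * 1e6)  # link speed in bits/second
    return seconds / 86_400        # seconds -> days

days = transfer_days(100, 100)     # 100 TB over a 100 Mb/s link
print(round(days, 1))              # ~92.6 days at 100% utilization
```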

- For DynamoDB throughput issues, check the shards/partitions; DynamoDB auto scaling is enabled by default UNLESS the table was created via the CLI.

- SQS: 1 minute to 14 days of message retention, with an in-flight limit of 120K messages for standard queues and 20K for FIFO queues.
- Only FIFO queues preserve the order of messages, not standard queues. FIFO provides exactly-once processing; standard provides at-least-once delivery. Know when to use SWF.
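A toy model of the FIFO side of that contrast (not the SQS API): a FIFO queue deduplicates by `MessageDeduplicationId` within its deduplication window and preserves arrival order, whereas a standard queue is at-least-once and may deliver duplicates:

```python
class FifoQueue:
    def __init__(self):
        self.messages = []
        self.seen_dedup_ids = set()

    def send(self, body, dedup_id):
        if dedup_id in self.seen_dedup_ids:
            return False               # duplicate dropped -> exactly-once
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append(body)     # appended strictly in send order
        return True

q = FifoQueue()
q.send("order-created", dedup_id="evt-1")
q.send("order-created", dedup_id="evt-1")  # producer retry of the same event
q.send("order-paid", dedup_id="evt-2")
print(q.messages)  # ['order-created', 'order-paid'] - ordered, no duplicate
```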

- Use DLM (Data Lifecycle Manager) and AWS Backup for automated snapshots.
- DDoS prevention: create a rate-based web ACL rule using AWS WAF and associate it with Amazon CloudFront.
- Cross-site scripting and SQL injection protections are enabled in WAF.
- The default Auto Scaling cooldown period is 300 seconds, and a new launch configuration is required to change the AMI (target groups choose which instances receive traffic).
- The default termination policy terminates the instance with the oldest launch configuration first, and Auto Scaling launch configurations CAN'T be modified after creation.
- Know the types of scaling policies in Auto Scaling: target tracking, step, and simple (note: the fourth, scheduled scaling, is not a dynamic scaling policy).
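A rough sketch of how target tracking (one of the dynamic policies above) resizes the group so the metric moves back toward its target, approximately new = ceil(current x metric / target). This is an illustrative approximation, not the exact AWS algorithm:

```python
import math

def target_tracking(current_capacity, current_metric, target_metric):
    # Scale capacity in proportion to how far the metric is from target,
    # rounding up and never going below one instance.
    return max(1, math.ceil(current_capacity * current_metric / target_metric))

print(target_tracking(4, 80, 50))  # CPU at 80% vs target 50% -> scale out to 7
print(target_tracking(4, 20, 50))  # CPU at 20% vs target 50% -> scale in to 2
```

The real service also respects min/max bounds, cooldowns, and scales in more conservatively than it scales out.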

- Decouple services with SWF and SQS.
- CloudFormation: use the CreationPolicy attribute when you want to wait on resource configuration actions before stack creation proceeds, using cfn-signal to signal the resource.
- Redshift = big data and business intelligence analytics.
- Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools (e.g. Splunk, ELK, S3).
- The CloudWatch agent is more efficient than the SSM agent for collecting metrics and logs.
- CloudTrail requirement: enable server access logging for all required Amazon S3 buckets.
- **By default, CloudTrail event log files are encrypted** using Amazon S3 server-side encryption (SSE).
- Use X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services.
- An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by the AWS Cloud.
- Elastic Network Adapter (ENA). It supports network speeds of up to 100 Gbps for supported instance types. Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to support VPC networking.
- An Elastic Fabric Adapter (EFA) is simply an Elastic Network Adapter (ENA) with added capabilities. It provides all of the functionality of an ENA, with additional OS-bypass functionality. OS-bypass is an access model that allows HPC and machine learning applications to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality.
- The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities.
- **CloudFront** does not have the capability to route traffic to the closest edge location via an Anycast static IP address (that's AWS Global Accelerator).
- AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. As you grow the number of workloads running on AWS, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth.

- VPC endpoints are region-specific and do not support inter-region communication. When setting one up, re-configure the route table's target and destination for the instances' subnet.
- Heterogeneous database migration is a two-step process: first use the AWS Schema Conversion Tool to convert the source schema and code to match the target database, then use the AWS Database Migration Service to migrate data from the source database to the target database. All required data type conversions are done automatically by AWS DMS during the migration.
- AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console.

