1 * I have an application, how will you set up a VPC ? What are all the components you will use ?
First work out how many servers are needed and how many public and private subnets, then create:
public and private route tables
internet gateway
NAT gateway
NACLs
security groups
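A minimal single-AZ sketch of the above with the AWS CLI (all IDs such as vpc-111, igw-111, subnet-public, and the CIDRs are placeholders; in practice you substitute the IDs returned by each command):

```shell
# VPC and two subnets
aws ec2 create-vpc --cidr-block 10.0.0.0/16                       # note the VpcId
aws ec2 create-subnet --vpc-id vpc-111 --cidr-block 10.0.1.0/24   # public subnet
aws ec2 create-subnet --vpc-id vpc-111 --cidr-block 10.0.2.0/24   # private subnet

# Internet gateway + public route table
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --vpc-id vpc-111 --internet-gateway-id igw-111
aws ec2 create-route-table --vpc-id vpc-111
aws ec2 create-route --route-table-id rtb-pub \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-111
aws ec2 associate-route-table --route-table-id rtb-pub --subnet-id subnet-public

# NAT gateway (needs an Elastic IP) + private route table
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-public --allocation-id eipalloc-111
aws ec2 create-route-table --vpc-id vpc-111
aws ec2 create-route --route-table-id rtb-priv \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-111
aws ec2 associate-route-table --route-table-id rtb-priv --subnet-id subnet-private
```

Security groups and NACLs are then attached to the instances and subnets respectively.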
============================================================================================================================================
2 * Where will you set the Route Table ?
In the VPC: route tables are created inside a VPC and then associated with its subnets.
============================================================================================================================================
3 * Services used in AWS ?
EC2, EBS, IAM, CloudWatch, CloudTrail, S3, Route 53, ELB
============================================================================================================================================
4 * How can you make any S3 bucket private ?
While creating the bucket, we can check the "Block all public access" setting; for an existing bucket, the same setting is available under the bucket's Permissions tab.
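For an existing bucket the same thing can be done from the CLI; a sketch (the bucket name is a placeholder):

```shell
# Turn on all four "Block Public Access" settings for the bucket
aws s3api put-public-access-block \
  --bucket my-bucket \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```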
============================================================================================================================================
5 * Cross-region replication ? how do you apply it ? will it copy automatically ?
Cross-region replication is a configuration of a source bucket and a destination bucket so that objects are copied from one bucket to another bucket in a different region.
For that we need both the source and the destination bucket to have versioning enabled,
and we need to attach a role to the replication configuration with the following actions:
role policy for the source bucket:
"s3:GetObjectVersionTagging",
"s3:GetObjectVersionAcl",
"s3:ListBucket",
"s3:GetReplicationConfiguration",
"s3:GetObjectVersion"
role policy for the destination bucket:
"s3:ReplicateObject",
"s3:ReplicateTags",
"s3:ReplicateDelete"
Yes, it will copy automatically.
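A sketch of applying the replication configuration with the CLI (the role ARN, account ID, and bucket names are placeholders; this uses the simple schema with an empty prefix so every new object is replicated):

```shell
# replication.json: minimal rule copying everything to the destination bucket
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::destination-bucket" }
    }
  ]
}
EOF
aws s3api put-bucket-replication \
  --bucket source-bucket \
  --replication-configuration file://replication.json
```

Only objects written after the rule is in place are replicated; existing objects need S3 Batch Replication.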
============================================================================================================================================
6 * S3 Transfer Acceleration ?
Amazon S3 Transfer Acceleration is a feature that enables faster transfers of files over the internet to and from Amazon S3 buckets. It uses Amazon CloudFront's globally distributed edge locations to accelerate transfers over the public internet.
With S3 Transfer Acceleration, you can achieve faster file uploads and downloads, particularly for large files or over long distances, by reducing the time it takes to transfer data over the public internet. This can be particularly useful for global organizations or for transferring data over unreliable network connections.
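A possible way to enable and then use it from the CLI (bucket and file names are placeholders):

```shell
# Enable Transfer Acceleration on the bucket
aws s3api put-bucket-accelerate-configuration \
  --bucket my-bucket --accelerate-configuration Status=Enabled

# Upload through the accelerate endpoint instead of the regional one
aws s3 cp bigfile.zip s3://my-bucket/ \
  --endpoint-url https://s3-accelerate.amazonaws.com
```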
============================================================================================================================================
7 * Read replica
In AWS, a read replica is a copy of a source database instance that is asynchronously replicated to a target database instance. The target instance is used for read operations only, while the source instance continues to handle both read and write operations. Read replicas are often used in scenarios where the read traffic to a database is much higher than the write traffic, and scaling the source instance vertically may not be cost-effective or practical.
Read replicas in AWS support several types of databases, including Amazon Aurora, Amazon RDS for MySQL, PostgreSQL, MariaDB, and Oracle. When you create a read replica, the target instance is created from a snapshot of the source instance's data, and then data changes are asynchronously replicated from the source instance to the target instance. The target instance can be located in the same region as the source instance or in a different region, which can be useful for disaster recovery or for serving users in different geographic regions.
Some benefits of using read replicas in AWS include:
Improved performance: Read replicas can offload read traffic from the source instance, which can improve the performance of the source instance and reduce latency for read requests.
Scalability: Read replicas can be used to scale read capacity horizontally without incurring the cost of scaling up the source instance.
High availability: Read replicas can be used to provide high availability for read requests, as they can continue to serve read requests even if the source instance fails or becomes unavailable.
Cost-effective: Read replicas can be created and terminated as needed, which can help reduce costs by only paying for the resources used when they are needed.
In summary, read replicas in AWS are a useful tool for improving performance, scalability, and high availability for read-heavy workloads.
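Creating a read replica might look like this with the CLI (instance identifiers, regions, and the account ID are placeholders):

```shell
# Same-region read replica of an existing RDS instance
aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica \
  --source-db-instance-identifier mydb

# Cross-region replica: run in the target region and pass the source as a full ARN
aws rds create-db-instance-read-replica \
  --region us-west-2 \
  --db-instance-identifier mydb-replica-west \
  --source-db-instance-identifier arn:aws:rds:us-east-1:111122223333:db:mydb
```

The application then points its read-only queries at the replica's endpoint while writes continue to go to the source.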
============================================================================================================================================
8 * Suppose you create an Auto Scaling group and specify 3 instances; will all 3 instances run at full capacity ?
When you create an Auto Scaling Group (ASG) in AWS with a desired capacity of three instances (the count is set on the ASG itself; the launch configuration only describes the instance template),
the ASG will launch three instances initially.
However, whether these instances run at full capacity or not depends on a few factors, such as the specifications of the instances,
the workload being run on them, and the configuration of the application or service running on them.
For example, if you launch three instances that have only 1 vCPU and 1 GB of memory each,
running an application that requires a lot of CPU and memory resources may cause these instances to
run at full capacity or even become overwhelmed. On the other hand, if you launch three instances that have 4 vCPUs and 16 GB of memory each,
and run a workload that does not require a lot of resources, these instances may not run at full capacity.
In addition, AWS provides various scaling policies that allow you to automatically adjust the number of instances in an ASG based
on metrics such as CPU utilization, network traffic, or other custom metrics. If your workload increases,
these policies can launch additional instances to handle the increased load, while if the workload decreases,
they can terminate instances to reduce costs.
Therefore, whether the instances in your ASG run at full capacity or not depends on many factors,
and AWS provides various tools and policies to help you optimize your infrastructure's performance and costs.
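For example, a target-tracking policy that keeps the group's average CPU near 50% could be sketched as (group and policy names are placeholders):

```shell
# Instances are added when average CPU rises above the target and
# removed when it falls well below it
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 50.0
  }'
```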
============================================================================================================================================
9 * Will the application fail or will it keep running ? Do you think AWS will provide a new instance ?
By default, Auto Scaling will continue to launch new instances even if the application is not running or is unresponsive.
This is because Auto Scaling is designed to operate independently of the application and can only adjust the number
of instances based on the metrics or triggers specified in the Auto Scaling policy.
However, it's important to ensure that your Auto Scaling policy is configured correctly to avoid launching instances unnecessarily
or causing issues with your application.
For example, you can configure your Auto Scaling group to use ELB health checks, so that instances whose application stops responding
are marked unhealthy and automatically replaced with new ones.
============================================================================================================================================
10 * How do you manage credentials ?
In AWS, there are different ways to manage credentials depending on your use case and the services you are using. Here are some common methods:
IAM Users: AWS Identity and Access Management (IAM) allows you to create users and groups with specific permissions to access AWS resources. You can create IAM users with programmatic access (access keys) or console access (passwords).
IAM Roles: IAM roles are similar to users, but they don't have permanent credentials. Instead, you can assume a role temporarily to access AWS resources. This can be useful when you want to give access to a third-party service or application without sharing your AWS credentials.
AWS Secrets Manager: Secrets Manager is a service that helps you protect and manage secrets such as passwords, database credentials, and API keys. You can use Secrets Manager to store, rotate, and retrieve credentials for your applications and services.
AWS SSO: AWS Single Sign-On (SSO) is a service that makes it easy to centrally manage access to multiple AWS accounts and business applications. With SSO, you can use your existing credentials (such as Microsoft Active Directory) to access AWS and other cloud applications.
AWS CLI: The AWS Command Line Interface (CLI) is a tool that allows you to interact with AWS services from the command line. You can configure the CLI with your AWS access keys to authenticate and access AWS resources.
Regardless of the method you choose, it's important to follow security best practices such as using strong passwords, rotating credentials regularly, and limiting access to only the resources that are necessary.
============================================================================================================================================
11 * If the credentials icon is not available, what will you do ?
If the credentials icon is not available, it might be because you are not logged in to the account,
or you do not have the necessary permissions to view or modify the credentials.
Here are some steps you can take to troubleshoot the issue:
Check if you are logged in to the correct account: Make sure you are logged in to the account where the credentials are stored.
If you have multiple accounts, check if you are logged in to the right one.
Verify your permissions: Check if you have the necessary permissions to view or modify the credentials.
If you do not have the required permissions, you may need to contact the administrator or owner of the account to grant you access.
Refresh the page: Sometimes, the page may not load correctly due to a temporary glitch or network issue.
Try refreshing the page to see if the credentials icon appears.
Clear your browser cache: Clearing your browser cache and cookies may help resolve the issue.
Try clearing your cache and cookies and then log in again to see if the credentials icon appears.
Contact AWS support: If none of the above steps work, you can contact AWS support for further assistance.
In summary, if the credentials icon is not available, you should verify your account login,
permissions, and refresh the page or clear your browser cache. If the issue persists, you can contact AWS support for further assistance.
============================================================================================================================================
12 * You are working on EC2 and the instance goes down; how do you resolve this ?
If an EC2 instance goes down, there are several steps you can take to resolve the issue:
Check the instance status: First, check the status of the instance in the EC2 console.
If the instance is in the "stopped" state, you can simply start the instance by right-clicking on it and selecting "start".
If the instance is in the "running" state but is not responding, proceed to the next step.
Check the system logs: Check the system logs to see if there are any error messages or other indicators of what caused the instance to go down.
You can access the system logs through the EC2 console by selecting the instance and clicking on the "Actions" dropdown menu,
and then selecting "Get System Log".
Restart the instance: If you are unable to determine the cause of the issue, or if the logs indicate that the issue is not easily fixable,
you can try restarting the instance. This may help resolve the issue in some cases.
To restart the instance, right-click on it and select "Instance State", then "Stop".
Wait a few seconds and then select "Instance State", then "Start".
Launch a new instance: If restarting the instance does not resolve the issue, or if the instance cannot be started,
you can launch a new instance and migrate any data or applications from the old instance to the new one. To do this,
create a new EC2 instance from a backup or image of the old instance, attach any necessary volumes or storage,
and configure the instance to use the same IP address or DNS name as the old instance.
Contact AWS support: If none of the above steps work, or if the issue is more complex,
you may need to contact AWS support for further assistance. AWS provides support services to help troubleshoot and
resolve issues with EC2 instances.
In summary, if an EC2 instance goes down, you can try checking the instance status, system logs, and restarting the instance.
If these steps do not resolve the issue, you can launch a new instance or contact AWS support for further assistance.
============================================================================================================================================
13 * You want to give someone access to an EC2 instance and you have only the private key; how do you enable them to access it ?
If you want to give someone access to an EC2 instance, you can create a new user in the instance's operating system and install that person's public key for them. Here are the steps you can follow:
1. Connect to the EC2 instance using SSH and the existing private key.
2. Create a new user as the root user.
3. (Optional) Set a password for the new user.
4. Add the new user to the sudoers group so that they can perform administrative tasks.
5. Put the new user's public key into their ~/.ssh/authorized_keys file (the private key stays with the user; it is never copied onto the server).
6. Test the new user's SSH access by disconnecting from the EC2 instance and reconnecting using the new user's key.
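On an Amazon Linux instance the steps might look like this (the user name, key file names, and IP are placeholders; on Ubuntu the sudo group is `sudo` rather than `wheel`):

```shell
# On the instance, connected with the existing key:
sudo adduser devuser                          # create the user
sudo usermod -aG wheel devuser                # grant sudo rights

# Install the user's PUBLIC key; the private key never leaves the user's machine
sudo mkdir -p /home/devuser/.ssh
sudo sh -c 'cat devuser.pub >> /home/devuser/.ssh/authorized_keys'
sudo chown -R devuser:devuser /home/devuser/.ssh
sudo chmod 700 /home/devuser/.ssh
sudo chmod 600 /home/devuser/.ssh/authorized_keys

# From the user's own machine:
ssh -i devuser_private_key devuser@203.0.113.25
```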
============================================================================================================================================
14 * EKS ?
Amazon EKS (Elastic Kubernetes Service) is a managed Kubernetes service: AWS runs and scales the Kubernetes control plane for you, and you run the worker nodes on EC2 or on Fargate.
============================================================================================================================================
15 * VPC ? Suppose I have a server that needs to download packages from the internet, but the server should not be accessed directly from outside. How can you do that ?
In Amazon Web Services (AWS), a Virtual Private Cloud (VPC) is a virtual network that provides a private and isolated section of the AWS Cloud.
It allows you to launch AWS resources, such as EC2 instances and RDS databases, in a defined virtual network that you have complete control over
, including the ability to configure IP addresses, subnets, and routing tables.
So if your server needs to download packages from the internet but must not be reachable directly from outside, we can create a NAT gateway
in a public subnet and add a route to it in the private route table that is associated with the private subnet. In this way we can
provide access to the internet without making the server publicly accessible.
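A sketch of the NAT-gateway part with the CLI (the subnet, allocation, and route-table IDs are placeholders):

```shell
# The NAT gateway lives in a PUBLIC subnet and needs an Elastic IP
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-public --allocation-id eipalloc-111

# The default route of the PRIVATE route table points at the NAT gateway
aws ec2 create-route --route-table-id rtb-private \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-111
```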
============================================================================================================================================
16 * 2 AWS accounts; the EC2 in one account should talk to the EC2 in the other account. How ?
To allow EC2 instances in one AWS account to communicate with EC2 instances in another AWS account, you can use VPC peering. VPC peering allows you to connect two VPCs (one in each account) together so that instances can communicate with each other as if they were on the same network.
Here are the high-level steps to set up VPC peering between two AWS accounts:
Create a VPC in each AWS account if you haven't already done so.
Create a VPC peering connection in the first AWS account and specify the VPC ID of the second AWS account.
Accept the VPC peering connection in the second AWS account.
Update the route tables in both VPCs to allow traffic to flow between them.
Configure the security groups in both VPCs to allow the necessary traffic.
After completing these steps, instances in both VPCs should be able to communicate with each other using their private IP addresses.
Note that VPC peering works both within a region and across regions (inter-region peering). You'll also need to ensure that there are no overlapping IP address ranges between the two VPCs, as overlapping CIDRs cannot be routed across the peering connection.
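The steps might be sketched with the CLI as follows (VPC IDs, account ID, CIDR, and the peering-connection ID are placeholders; omit --peer-region when both VPCs are in the same region):

```shell
# From account A: request peering to account B's VPC
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-aaaa --peer-vpc-id vpc-bbbb \
  --peer-owner-id 222233334444 --peer-region us-east-1

# From account B: accept the request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-1111

# In BOTH VPCs: route the other side's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-aaaa \
  --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-1111
```

Security groups on both sides must also allow the traffic.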
============================================================================================================================================
17 * IAM ? I have 10 instances; how do you give a user access to only 5 of them ?
To give a user access to only 5 out of 10 instances, you can use AWS Identity and Access Management (IAM) to create a custom policy that specifies the resources the user is allowed to access.
Here are the high-level steps to create a custom IAM policy for this scenario:
Identify the IAM user or group that you want to grant access to.
Create a custom policy that allows access to the specific resources you want to grant access to.
For example, if you want to grant access to only 5 out of 10 instances,
you can create a policy that allows access to the specific instance IDs of those 5 instances.
The policy can also specify which actions the user is allowed to perform on those instances, such as starting or stopping the instances.
Attach the custom policy to the IAM user or group that you want to grant access to
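A hypothetical policy document for this (instance IDs, region, and account ID are placeholders; `ec2:DescribeInstances` does not support resource-level permissions, so it is granted on `*`):

```shell
# Policy granting start/stop/reboot on exactly five instance ARNs
cat > five-instances-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
      "Resource": [
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0aaa1",
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0aaa2",
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0aaa3",
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0aaa4",
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0aaa5"
      ]
    },
    { "Effect": "Allow", "Action": "ec2:DescribeInstances", "Resource": "*" }
  ]
}
EOF
# Attach it to a group (group and policy names are placeholders):
# aws iam put-group-policy --group-name dev-team \
#   --policy-name five-instances --policy-document file://five-instances-policy.json
```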
============================================================================================================================================
18 * Cross region Route53 ?
Cross-region Route 53 is a feature of Amazon Route 53,
which is Amazon's highly available and scalable cloud Domain Name System (DNS) service. It allows you to route traffic to resources that are located in different AWS regions.
With cross-region Route 53, you can create a global DNS infrastructure that can route traffic to your AWS resources located in different regions based
on latency, health, or geographic location. This means that you can have a global presence for your application, with low latency and high availability,
by distributing traffic across multiple regions.
============================================================================================================================================
19 * Health checks in Route53
Health checks in Amazon Route 53 are a feature that enables you to monitor the health and performance of your resources, such as web servers or load balancers.
A health check is a periodic probe of a resource's status, typically conducted at regular intervals, to ensure that it's operating correctly and efficiently.
Route 53 health checks can be configured to monitor endpoints, such as URLs, IP addresses, or other DNS records, and can be used to determine
whether the endpoint is healthy or unhealthy. If an endpoint fails a health check, Route 53 can automatically route traffic to a healthy endpoint,
which can help to minimize downtime and improve the availability of your applications.
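Creating such a health check might look like this (the domain, path, and caller reference are placeholders):

```shell
# HTTP health check probing /health on the endpoint every 30 seconds;
# the endpoint is marked unhealthy after 3 consecutive failures
aws route53 create-health-check \
  --caller-reference my-hc-001 \
  --health-check-config '{
    "Type": "HTTP",
    "FullyQualifiedDomainName": "www.example.com",
    "Port": 80,
    "ResourcePath": "/health",
    "RequestInterval": 30,
    "FailureThreshold": 3
  }'
```

The health check is then referenced from a failover or weighted record set so traffic moves away from unhealthy endpoints.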
============================================================================================================================================
20 * DualStack configuration in Route53
DualStack is a feature of Amazon Route 53 that enables you to use both IPv4 and IPv6 addresses for your resources.
IPv6 is the next generation of the Internet Protocol (IP) and provides a much larger address space than IPv4, which is the current version of IP.
With DualStack, you can create DNS records that contain both IPv4 and IPv6 addresses for your resources, such as web servers or load balancers.
When a client queries the DNS records, Route 53 returns both the IPv4 and IPv6 addresses, if available, to the client. The client then selects the appropriate address to use based on its own capabilities and preferences.
To configure DualStack in Route 53, you need to create DNS records for your resources that contain both IPv4 and IPv6 addresses.
You can do this using the Route 53 console, API, or command line tools.
When you create the DNS records, you specify the IP addresses for each protocol and Route 53 automatically associates them with the appropriate record sets.
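A sketch of creating a matching A and AAAA pair for one name (the zone ID, record name, and addresses are placeholders from documentation ranges):

```shell
cat > dualstack.json <<'EOF'
{
  "Changes": [
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "www.example.com", "Type": "A", "TTL": 300,
        "ResourceRecords": [ { "Value": "203.0.113.10" } ] } },
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "www.example.com", "Type": "AAAA", "TTL": 300,
        "ResourceRecords": [ { "Value": "2001:db8::10" } ] } }
  ]
}
EOF
aws route53 change-resource-record-sets \
  --hosted-zone-id Z111 --change-batch file://dualstack.json
```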
============================================================================================================================================
21 * What is S3 and why do customers choose S3 ?
Amazon S3 (Simple Storage Service) is a cloud-based storage service provided by Amazon Web Services (AWS) that allows users to store and retrieve data from anywhere on the web. S3 provides scalable, secure, durable, and highly available object storage infrastructure that enables customers to store and retrieve any amount of data, at any time, from anywhere.
There are several reasons why customers choose S3:
Scalability: S3 provides virtually unlimited storage capacity, allowing customers to store any amount of data they need without worrying about capacity planning or infrastructure limitations.
Durability: S3 is designed for 99.999999999% durability, meaning that objects stored in S3 are protected against data loss due to hardware failures, natural disasters, or other events.
Accessibility: S3 provides a simple web services interface that enables customers to store and retrieve data from anywhere on the web, making it easy to access data from a variety of applications and services.
Security: S3 provides several features to help customers secure their data, including encryption, access controls, and bucket policies that allow customers to define access permissions at a granular level.
Cost-effective: S3 provides a pay-as-you-go pricing model that allows customers to only pay for the storage they use, without any upfront costs or long-term commitments.
Integration: S3 integrates with a wide range of AWS services and third-party applications, making it easy to build complex data-intensive applications and workflows.
Overall, S3 is a powerful storage service that provides customers with a reliable, scalable, and cost-effective way to store and retrieve any amount of data from anywhere on the web.
============================================================================================================================================
22 * When the jar is created, how will you push it to S3?
To push a JAR file to Amazon S3, you can use the AWS Management Console, AWS CLI, or AWS SDKs.
Here are the high-level steps for pushing a JAR file to S3 using AWS CLI:
Install and configure AWS CLI on your local machine or EC2 instance.
Create an S3 bucket if you haven't already done so.
Create a folder in the S3 bucket where you want to store the JAR file.
Use the "aws s3 cp" command to push the JAR file to the S3 bucket and folder. For example, the command might look like this:
command: aws s3 cp myjar.jar s3://mybucket/myfolder/
This command copies the "myjar.jar" file to the "myfolder" folder in the "mybucket" S3 bucket.
Verify that the JAR file was uploaded to S3 by checking the S3 Management Console or running the "aws s3 ls" command.
Note that you may also want to configure access controls for the S3 bucket and folder to restrict access to the JAR file as needed.
============================================================================================================================================
23 * DynamoDB
Amazon DynamoDB is a fully managed, serverless NoSQL key-value and document database that delivers single-digit-millisecond performance at any scale.
============================================================================================================================================
24 * VPC architecture
The VPC (Virtual Private Cloud) architecture in Amazon Web Services (AWS) consists of several key components that work together
to create a secure and isolated network environment within the AWS cloud. Here is a brief overview of the main components of a VPC architecture:
VPC: A VPC is the primary building block of the AWS networking architecture.
It is a virtual network that is logically isolated from other virtual networks in the AWS Cloud.
You can think of it as a virtual data center in the cloud.
Subnets: Subnets are logical partitions of a VPC that allow you to isolate resources within the VPC.
You can create one or more subnets within a VPC to segment resources into different groups based on their function,
security requirements, or other criteria.
Route tables: Route tables are used to define the routing rules for traffic within the VPC.
You can create one or more route tables within a VPC and associate them with subnets to control how traffic flows between resources.
Internet Gateway: An Internet Gateway (IGW) is a horizontally scaled, redundant, and highly available VPC component that
allows traffic between the VPC and the internet. An IGW enables resources within a VPC to communicate with resources outside the VPC.
Network Access Control Lists: Network Access Control Lists (NACLs) are a set of rules that control
inbound and outbound traffic at the subnet level.
NACLs act as a firewall for subnets and allow you to restrict or allow traffic based on IP address, protocol, or port number.
Security Groups: Security Groups act as a virtual firewall for individual resources,
such as EC2 instances or RDS databases, within a VPC. You can use security groups to control inbound and outbound traffic
to and from resources based on IP address, protocol, or port number.
Virtual Private Gateway: A Virtual Private Gateway (VGW) is a device that allows you to establish a secure VPN connection between your
on-premises network and your VPC. You can use a VGW to extend your on-premises network to the AWS Cloud.
These components work together to create a secure and isolated network environment within the
AWS Cloud that can be customized to fit your specific needs.
============================================================================================================================================
25 * Why do we need ELB & how do you select which type of load balancer is suitable for my application ?
An Elastic Load Balancer distributes incoming traffic across multiple targets so that no single instance is overwhelmed and unhealthy instances stop receiving requests. Selecting the appropriate type of load balancer for your application depends on several factors such as the type of traffic, the architecture of your application, and the desired level of control and configuration.
Here is a brief overview of the different types of load balancers available on AWS and their characteristics:
Application Load Balancer (ALB): This type of load balancer operates at Layer 7 (the application layer) and is ideal for distributing HTTP/HTTPS traffic. ALBs can route traffic based on advanced request routing rules, support host and path-based routing, and integrate with AWS services such as AWS WAF and AWS Lambda. ALBs also support WebSocket and HTTP/2 traffic.
Network Load Balancer (NLB): This type of load balancer operates at Layer 4 (the transport layer) and is ideal for handling TCP, UDP, and TLS traffic. NLBs can route traffic based on IP protocol data, TCP/UDP port, or source IP, and can handle millions of requests per second with ultra-low latencies.
Classic Load Balancer (CLB): This is the legacy load balancer on AWS and supports both Layer 4 and Layer 7 traffic. CLBs can route traffic based on basic routing rules such as round-robin and session stickiness. However, it lacks some of the advanced features and capabilities of ALBs and NLBs.
When selecting a load balancer for your application, consider the following factors:
Traffic type: Determine the type of traffic that your application receives and whether it is HTTP, HTTPS, TCP, or UDP traffic.
Application architecture: Consider the architecture of your application, whether it is a monolithic or microservices-based architecture, and whether it requires advanced routing rules and features.
Scalability requirements: Determine the expected level of traffic and whether the load balancer can handle the expected load.
Security requirements: Consider whether the load balancer integrates with AWS services such as AWS WAF and AWS Certificate Manager to provide secure and encrypted traffic.
********************************************************************************************************************************************************************************
Overall, ALBs are the recommended load balancer for most HTTP/HTTPS-based applications,
NLBs are recommended for high-performance, low-latency applications, and CLBs are recommended for legacy applications or for those that do not require advanced features.
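As a rough sketch of how the choice shows up in practice, an ALB can be created from the CLI as below (the name, subnet IDs, and security group ID are placeholders, not real values):

```shell
# Create an internet-facing Application Load Balancer (all IDs are placeholders)
aws elbv2 create-load-balancer \
  --name my-alb \
  --type application \
  --scheme internet-facing \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789

# A Network Load Balancer is created the same way with --type network
```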
============================================================================================================================================
26 * cross-zone load balancing
Cross-zone load balancing is a feature of Elastic Load Balancing (ELB) that evenly distributes incoming traffic across all healthy registered instances in all Availability Zones associated with a load balancer.
When this feature is enabled, the ELB will distribute traffic across all healthy instances in all Availability Zones, regardless of which zone the ELB is currently in. This means that if an instance in one zone becomes unhealthy, the ELB will continue to route traffic to healthy instances in other zones.
The cross-zone load balancing feature provides better availability and fault tolerance for applications running on Amazon Web Services (AWS) because it ensures that all healthy instances receive traffic, regardless of their zone. Without cross-zone load balancing, ELB will only distribute traffic to instances in the same Availability Zone as the ELB, which can result in uneven traffic distribution and potentially impact the availability of the application.
Cross-zone load balancing is always enabled for Application Load Balancers (ALBs). For Network Load Balancers (NLBs) it is disabled by default and can be enabled as a load balancer attribute. For Classic Load Balancers (CLBs), the default depends on how the load balancer is created (enabled from the console, disabled from the API/CLI). It is generally recommended to enable cross-zone load balancing where it is configurable, to ensure even traffic distribution and optimal availability.
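Where cross-zone load balancing is configurable, it is toggled as a load balancer attribute. A sketch (the ARN below is a placeholder):

```shell
# Enable cross-zone load balancing on an NLB (the ARN is a placeholder)
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/abc123 \
  --attributes Key=load_balancing.cross_zone.enabled,Value=true
```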
============================================================================================================================================
27 * cloudwatch & cloudtrail
CLOUDWATCH:
Amazon CloudWatch is a monitoring and observability service from Amazon Web Services (AWS) that allows you to monitor resources and applications in the AWS Cloud in near real-time.
It provides you with metrics, logs, and alarms to help you keep track of the performance and health of your resources and applications,
and enables you to take action to resolve issues and optimize performance.
CloudWatch collects and stores metric data, which are time-stamped values that represent the performance of your resources and applications.
These metrics can be generated by AWS services, such as EC2 instances and RDS databases, or custom metrics that you define.
In addition to metrics, CloudWatch also collects and stores log data, which are text-based records of events and messages
generated by resources and applications. You can use CloudWatch Logs to store, monitor, and analyze log data, and to generate insights
and alerts based on specific patterns or conditions.
CloudWatch also provides you with the ability to set alarms based on metric data, which can trigger automated actions or notifications
when specific thresholds or conditions are met. For example, you can create an alarm that triggers an action if the CPU utilization
of an EC2 instance exceeds a certain percentage for a certain period of time.
Some key features of CloudWatch include:
Real-time monitoring: CloudWatch provides you with real-time metrics and logs to help you keep track of
the performance and health of your resources and applications.
Customizable dashboards: You can create customizable dashboards to display the metrics and logs that are most important to you.
Automation and notifications: CloudWatch enables you to set alarms based on metric data, and trigger automated actions or notifications
when specific thresholds or conditions are met.
Integration with other AWS services: CloudWatch can be integrated with other AWS services, such as EC2, RDS, and Lambda,
to provide you with a comprehensive view of your AWS resources and applications.
In summary, CloudWatch is a monitoring and observability service that enables you to monitor and manage the performance and
health of your resources and applications in the AWS Cloud, and take action to optimize performance and resolve issues.
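The CPU-utilization alarm described above can be sketched from the CLI as follows (the instance ID and SNS topic ARN are placeholders):

```shell
# Alarm when average CPU of an instance stays above 80% for two 5-minute periods
# (instance ID and SNS topic ARN are placeholders)
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0abc123def456 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:ops-alerts
```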
CLOUDTRAIL:
CloudTrail records API activity in your AWS account and is enabled by default: the event history keeps the last 90 days of management events in every region. With an organization trail it can record activity for multiple AWS accounts, delivering the logs to an S3 bucket and optionally to CloudWatch Logs. CloudTrail logs stored in S3 can be queried using Amazon Athena or CloudTrail Lake.
There are three types of events it can record:
management events
data events
Insights events
CloudTrail enables you to do the following:
Governance, Risk, and Compliance (GRC) Auditing: You can use CloudTrail to provide an audit trail of activity for compliance and security audits.
It can be used to identify which user made an API call and when,
which resource was used, what was the request, and what was the response.
Operational troubleshooting and root-cause analysis: CloudTrail enables you to identify and troubleshoot operational issues within your
AWS environment. You can use the recorded data to identify the source of operational
problems or identify the root cause of operational issues.
Security analysis: CloudTrail enables you to detect security threats by providing a comprehensive view of user and resource activity
in your AWS account. You can use CloudTrail to monitor and alert on specific activities that indicate a security breach.
Resource Change Tracking: CloudTrail enables you to track the changes to your AWS resources, including who made the changes and
when they were made.
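A minimal trail setup and event lookup might look like this (bucket and trail names are placeholders; the bucket must already exist with a CloudTrail bucket policy):

```shell
# Create a multi-region trail that delivers logs to S3, then start logging
aws cloudtrail create-trail --name my-trail \
  --s3-bucket-name my-cloudtrail-logs --is-multi-region-trail
aws cloudtrail start-logging --name my-trail

# Look up recent console logins from the 90-day event history
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin
```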
============================================================================================================================================
28 * How many aws accounts have you managed?
============================================================================================================================================
30 * client has different data centers in different locations. if your client has 50 aws accounts and there is a requirement to establish connectivity so that the applications hosted on the data center servers can easily communicate with applications (or) resources hosted in different vpcs and different aws accounts, how are you going to establish the connection?
One option is to use AWS Direct Connect, which is a dedicated network connection that provides a high-bandwidth, low-latency link between the data center and AWS. With AWS Direct Connect, the client can establish a private virtual interface (VIF) between the data center and the VPCs in different AWS accounts. This allows for direct, secure connectivity between the on-premises network and the AWS environment.
Another option is to use VPN connections between the on-premises data center and the VPCs in different AWS accounts. This allows for encrypted traffic to flow between the on-premises network and the AWS environment over the public internet.
Additionally, the client can use AWS Transit Gateway, which is a service that simplifies network connectivity between VPCs and VPNs across different AWS accounts. With Transit Gateway, the client can create a central hub that acts as a transit point for all VPC and VPN traffic, which allows for easier management and scaling of the network architecture.
Ultimately, the specific solution will depend on the client's requirements for security, performance, scalability, and cost. It is important to carefully evaluate each option and choose the one that best meets the needs of the client's application and network architecture.
============================================================================================================================================
31 * customer has 50 (or) 100 odd aws accounts and is looking for a solution so that each and every vpc can communicate with the others. how are you going to establish that connectivity?
To establish connectivity between multiple VPCs across different AWS accounts, there are several options available, depending on the specific requirements of the customer's application and network architecture. Here are some of the common options:
VPC peering: VPC peering allows the customer to connect two VPCs together using a direct network route, which enables the VPCs to communicate with each other as if they were in the same network. VPC peering can be established between VPCs in the same or different AWS accounts.
Transit Gateway: AWS Transit Gateway is a fully-managed service that allows the customer to connect VPCs and VPNs across multiple accounts and regions. With Transit Gateway, the customer can create a central hub that acts as a transit point for all VPC and VPN traffic, which allows for easier management and scaling of the network architecture.
VPN connections: The customer can use VPN connections to establish secure connectivity between VPCs across different AWS accounts. This allows for encrypted traffic to flow between the VPCs over the public internet.
AWS PrivateLink: AWS PrivateLink is a service that enables the customer to access services hosted on other VPCs or AWS services over a private connection. This can be useful for securely accessing services across multiple VPCs in different AWS accounts.
Ultimately, the specific solution will depend on the customer's requirements for security, performance, scalability, and cost. It is important to carefully evaluate each option and choose the one that best meets the needs of the customer's application and network architecture.
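As an illustration, peering two VPCs involves creating the peering connection, accepting it from the other account, and adding routes on both sides (all IDs, CIDRs, and the account number below are placeholders). Note that peering does not scale to a full mesh of 50+ VPCs, which is where Transit Gateway fits better:

```shell
# Request a peering connection to a VPC in another account (IDs are placeholders)
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222 --peer-owner-id 111122223333

# The owner of the peer VPC accepts the request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-12345678

# Each side then routes the other VPC's CIDR via the peering connection
aws ec2 create-route --route-table-id rtb-11111111 \
  --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-12345678
```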
============================================================================================================================================
32 * customer has a physical-site production environment (or) workloads running on aws. one particular IP/CIDR is
continuously hitting their environment multiple times a day and they suspect malicious activity.
they are asking you to block any kind of traffic from that IP/CIDR. how are you going to block it?
To block traffic from a specific IP address or CIDR block, use network ACLs (NACLs) and/or AWS WAF.
Note that security groups cannot do this on their own: they support only allow rules, so they can restrict what you permit but cannot explicitly deny a source.
Here are the steps to block traffic from a specific IP address or CIDR block:
Identify the IP address or CIDR block that is causing the issue.
Add a deny rule to the network ACL of the affected subnets for that IP address or CIDR block.
NACL rules are evaluated in rule-number order, so give the deny rule a lower number than any rule that would allow the traffic.
Review your security groups and tighten them so they only allow traffic from the sources you actually need.
If the traffic reaches your application through an ALB, API Gateway, or CloudFront, attach an AWS WAF web ACL
with an IP set match rule that blocks the specified IP addresses or CIDR blocks.
Monitor the traffic (for example with VPC Flow Logs) to confirm that the IP address or CIDR block can no longer reach your environment.
By using a combination of these security measures, you can effectively block traffic from a specific IP address or CIDR block
and prevent further malicious activity.
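The NACL deny rule can be added from the CLI like this (the ACL ID and CIDR are placeholders):

```shell
# Deny all protocols from the offending CIDR at the subnet boundary.
# Rule numbers are evaluated in order, so 50 runs before a typical allow at 100.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789 \
  --ingress \
  --rule-number 50 \
  --protocol -1 \
  --rule-action deny \
  --cidr-block 203.0.113.0/24
```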
============================================================================================================================================
33 * CIDR for 15 ec2 instances , 2 subnets 1 private and 1 public , write CIDR ranges
Note that a /28 VPC with two /29 subnets is too small: AWS reserves 5 addresses in every subnet, so each /29 would leave only 3 usable IPs (6 in total), far short of 15 instances. A working allocation:
vpc - 10.0.0.0/26 (64 addresses)
pubsubnet - 10.0.0.0/27 (32 addresses, 27 usable)
privatesubnet - 10.0.0.32/27 (32 addresses, 27 usable)
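Subnet sizing can be sanity-checked with a little arithmetic: AWS reserves 5 addresses in every subnet (network address, the first three host addresses, and broadcast), so a /n subnet yields 2^(32-n) - 5 usable addresses:

```shell
# Usable addresses in an AWS subnet = 2^(32 - prefix_length) - 5
prefix=27
usable=$(( (1 << (32 - prefix)) - 5 ))
echo "/$prefix subnet: $usable usable addresses"   # prints: /27 subnet: 27 usable addresses
```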
============================================================================================================================================
34 * you have created one server in a private subnet and given me its IP address. I am the person who will install some software
(or) configurations on that server. I am sitting in front of my laptop and complaining
that I am not able to access the machine from my laptop. how can you resolve it?
Since the server is in a private subnet it has no public IP and cannot be reached directly from the internet. For this case I will launch a bastion (jump) host in the public subnet, create a user on it, and give him those credentials so he can SSH to the bastion first and from there to the private instance to download and install the software. Since I have added a NAT gateway to the
private subnet's route table, he won't have any issue downloading packages from the internet.
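From the laptop, OpenSSH's ProxyJump option reaches the private instance through the bastion in one command (the key file, user names, and IPs below are placeholders):

```shell
# SSH to the private instance (10.0.0.25) via the bastion's public IP in one hop
ssh -i mykey.pem -J ec2-user@<bastion-public-ip> ec2-user@10.0.0.25
```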
============================================================================================================================================
35 * how can you access a machine without a .pem key?
By default, when you create an Amazon EC2 instance, you are required to specify a .pem key pair for secure SSH access to the instance.
However, if you don't have access to the .pem key pair, there are a few options available to access the instance.
Reset the key pair: You can reset the key pair associated with your instance through the AWS Management Console or using the AWS CLI.
This will allow you to specify a new .pem key pair, and then use that key pair to access the instance.
Use session manager: If you have configured AWS Systems Manager Session Manager on your EC2 instance,
you can use it to access the instance without a .pem key. Session Manager provides a secure and
auditable way to access your instances without the need for SSH keys. You can connect to your instance
using the AWS Management Console or the AWS CLI.
Mount the root EBS volume: You can detach the root EBS volume from the instance and attach it to another instance.
Once attached, you can mount the volume and edit the ~/.ssh/authorized_keys file on it to add a new public key.
After making the change, you can reattach the volume to the original instance and access it using the new key pair.
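The Session Manager option is the simplest in practice: with the SSM agent running and an instance profile that permits Systems Manager, no key or open SSH port is needed (the instance ID is a placeholder):

```shell
# Open an interactive shell on the instance via Systems Manager Session Manager
aws ssm start-session --target i-0abc123def456
```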
============================================================================================================================================
36 * if you create a transit gateway what exactly you will define in attachments?
AWS Transit Gateway is a managed service that simplifies the network connectivity between Amazon Virtual Private Clouds (VPCs),
on-premises networks, and other cloud resources. It acts as a hub-and-spoke architecture that enables you to connect your VPCs and
other resources to your Transit Gateway, and route traffic between them in a centralized and efficient way.
Transit Gateway provides a scalable and highly available solution that allows you to easily manage network connectivity across
multiple accounts and regions. It supports VPN, Direct Connect, and Transit Gateway Peering connections, enabling you to establish
secure and private network connections between your VPCs and your on-premises resources.
When you create a Transit Gateway in AWS, you need to define the attachments, which are the connections between your Transit Gateway and your network resources such as VPCs or VPN connections. The attachments are defined in the following way:
Define the attachment type: You need to specify the type of attachment you want to create. You can choose from several attachment types including VPC, VPN, Direct Connect, and Transit Gateway Peering.
Specify the attachment details: Depending on the attachment type you choose, you need to provide the necessary details for the attachment. For example, if you are attaching a VPC, you need to specify the VPC ID and the subnet(s) in which the Transit Gateway will be connected.
Define the routing: Once you have defined the attachment, you need to specify the routing rules that will be used to route traffic between the attachment and the Transit Gateway. You can use the AWS Management Console, CLI or APIs to define the routing.
Configure security: You can also configure security settings for your attachments using security groups and Network Access Control Lists (NACLs).
By defining the attachments in this way, you can create a hub-and-spoke architecture that enables you to connect your VPCs and other resources to your Transit Gateway, and route traffic between them in a centralized and efficient way. This simplifies network management, reduces operational costs, and enhances security and compliance.
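Creating the gateway and a VPC attachment can be sketched as follows (all IDs are placeholders):

```shell
# Create a Transit Gateway to act as the central hub
aws ec2 create-transit-gateway --description "central hub"

# Attach a VPC, specifying one subnet per Availability Zone to connect through
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789 \
  --vpc-id vpc-11111111 \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222
```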
============================================================================================================================================
37 * you are sharing the key with a group of 4 members & these 4 members log in with the same username.
after 4/5 days the customer complains that a file was deleted. how can you trace back who did what?
If you have shared a key with a group of 4 members who log in from the same username, it may be difficult to track back who did what if a file was deleted. However, there are a few steps you can take to improve accountability and traceability:
Use individual user accounts: Instead of sharing a single username, create individual user accounts for each member of the group. This way, each user can log in using their own credentials, and their actions can be tracked and audited individually.
Enable logging and auditing: Enable logging and auditing for the system and the file in question. This will allow you to review the logs and identify who accessed the file and when.
Use version control: If the file in question is a document or code file, consider using version control software such as Git. This will allow you to track changes to the file, who made the changes, and when they were made.
Educate users: Educate the users on the importance of security and accountability. Make sure they understand the potential consequences of unauthorized access or data tampering, and encourage them to report any suspicious activity.
By taking these steps, you can improve accountability and traceability, and minimize the risk of unauthorized access or data tampering.
============================================================================================================================================
38 * what instances are you using in your project? why those particular instances?
We were using compute-optimized instances in our project because we were mainly running high-performance web servers, and for this case these
instance types are very helpful since they are optimized for compute-bound workloads.
we were using c6i.16xlarge:
64 vCPUs
128 GiB of RAM
============================================================================================================================================
39 * customer is asking you to change the instance family of 2 ec2 instances which are part of an auto scaling group sitting behind a
load balancer & the condition is they are not looking for new instances and none of the instances are to be deleted. how are you going to
change the instance family of these 2 instances?
To change the instance family of the two EC2 instances without creating new instances or deleting any existing ones,
you can follow these steps:
Put the instances in Standby: Move each instance into the Standby state in the Auto Scaling group (or suspend the health check
and replacement processes). Otherwise, stopping an instance would cause the group to mark it unhealthy and replace it,
which violates the requirement that no new instances be launched.
Stop the instances: Stop the instance whose family you want to change. This ensures that any running processes or services
are gracefully terminated; the instance type of a running instance cannot be modified.
Change the instance family: After the instance has stopped, select it in the EC2 console, click the "Actions" button,
choose "Instance settings" and then "Change instance type", pick the new instance family type from the dropdown menu, and click "Apply".
Start the instances and return them to service: Start the instance and move it back from Standby to InService
in the Auto Scaling group. Do this one instance at a time so the application stays available behind the load balancer.
Update the launch template: Finally, update the group's launch template (or launch configuration) to the new instance type
so that any instances launched in the future use the same family.
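The flow can be sketched with the CLI; an instance in an Auto Scaling group should be moved to Standby first so the group does not replace it while it is stopped (the group name, instance ID, and target type are placeholders):

```shell
# Take the instance out of service without terminating it
aws autoscaling enter-standby --auto-scaling-group-name my-asg \
  --instance-ids i-0abc123def456 --should-decrement-desired-capacity

# Stop it, change the type, and start it again
aws ec2 stop-instances --instance-ids i-0abc123def456
aws ec2 wait instance-stopped --instance-ids i-0abc123def456
aws ec2 modify-instance-attribute --instance-id i-0abc123def456 \
  --instance-type Value=c6i.2xlarge
aws ec2 start-instances --instance-ids i-0abc123def456

# Return it to the Auto Scaling group
aws autoscaling exit-standby --auto-scaling-group-name my-asg \
  --instance-ids i-0abc123def456
```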
============================================================================================================================================
40 * what kind of individual contributor role have you played in your experience on aws?
what kind of issues have you resolved? what kind of troubleshooting have you performed?
============================================================================================================================================
41 * pre signed url in s3?
A pre-signed URL in Amazon S3 is a URL that allows anyone who receives the URL to perform a specific action on an S3 object,
such as downloading an object or uploading a file to a specific bucket.
The pre-signed URL is generated using your AWS credentials and includes an expiration time, after which the URL is no longer valid.
The primary use case for pre-signed URLs is to provide temporary access to private objects in S3 to clients that do not have AWS credentials.
For example, you may want to provide a pre-signed URL to a third-party vendor so that they can upload files to your S3 bucket for a
limited period of time, without giving them permanent access to your AWS resources.
To create a pre-signed URL, you can use the AWS SDK or AWS CLI, and specify the S3 bucket name, object key,
and the action you want to allow, such as GET or PUT. You can also specify an expiration time,
after which the pre-signed URL will be invalid. Once the URL is generated, you can provide it to the client who needs temporary access to the S3
object.
Using pre-signed URLs, you can control access to your S3 objects more securely and with more flexibility than
simply making them publicly accessible.
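A download (GET) pre-signed URL can be generated in one CLI call (bucket and key are placeholders):

```shell
# Generate a GET pre-signed URL valid for 1 hour; the command prints the URL
aws s3 presign s3://my-bucket/report.pdf --expires-in 3600
```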
============================================================================================================================================
42 * different routing policies in aws? which policy you have used for your project?
There are several routing policies available in AWS Route 53, which include:
Simple routing policy: This is the default routing policy and is used when you have a single resource that performs a given function.
Weighted routing policy: This policy allows you to route traffic to multiple resources in proportions that you specify.
Latency-based routing policy: This policy is used when you have resources in multiple AWS regions and want to route traffic to the region
with the lowest latency.
Failover routing policy: This policy is used when you have a primary and a secondary resource and want to route traffic
to the secondary resource only if the primary resource is unavailable.
Geolocation routing policy: This policy is used when you want to route traffic to resources based on the geographic location of the request.
Multivalue answer routing policy: This policy is used when you want to respond to DNS queries with a list of potential IP addresses
for a given resource.
The routing policy used in a project will depend on the specific requirements of the application and infrastructure.
In my experience, I have used the weighted routing policy to balance traffic between multiple EC2 instances and the latency-based routing
policy to route traffic to resources in different AWS regions based on the lowest latency.
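One of the two weighted records for a 70/30 split might be created like this (the hosted zone ID, domain, and IP are placeholders); a second record with a different SetIdentifier and Weight completes the split:

```shell
# Upsert a weighted A record carrying 70% of traffic for www.example.com
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Weight": 70,
        "TTL": 60,
        "ResourceRecords": [{"Value": "192.0.2.10"}]
      }
    }]
  }'
```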
============================================================================================================================================
43 * diff b/w cname and alias?
In summary, CNAME records are used to map one domain name to another domain name,
while Alias records are used to map a domain name to an AWS resource.
Alias records can be used for the domain's apex record, while CNAME records cannot.
CNAME: A CNAME (Canonical Name) record is used to create an alias for a domain name.
It maps one domain name to another domain name.
For example, if you have a domain name www.example.com and you want to map it to another domain name, such as www.example.net,
you can create a CNAME record for www.example.com that points to www.example.net.
However, CNAME records cannot be used for the domain's apex record, such as example.com, as they cannot coexist with other record types,
such as MX, NS, SOA, and others.
Alias: An Alias record is also used to create an alias for a domain name, but it can be used for the domain's apex record.
It maps a domain name to an AWS resource, such as an Amazon S3 bucket, an Elastic Load Balancer, or an Amazon CloudFront distribution.
Alias records are preferred over CNAME records because they provide better performance and can be used for the domain's apex record.
============================================================================================================================================
44 * which record is used to resolve a dns name to an ip?
The A record maps a domain name to one or more IPv4 addresses. When a user enters a domain name in their web browser,
the browser sends a DNS query to a DNS resolver to look up the IP address associated with the domain name.
The resolver looks up the A record for the domain name and returns the IP address to the browser,
which can then connect to the server at that IP address.
It's worth noting that there is also a similar record called AAAA (or "quad-A") record,
which is used to map a domain name to one or more IPv6 addresses.
============================================================================================================================================
45 * any experience with lambda? any experience in writing lambda functions?
AWS Lambda is a serverless compute service provided by Amazon Web Services (AWS).
It enables users to run code without provisioning or managing servers, and pay only for the compute time consumed by the code.
With AWS Lambda, users can upload their code as a function and configure it to trigger automatically in response to specific events,
such as changes to data in Amazon S3, updates to a DynamoDB table, or API Gateway requests.
The Lambda service takes care of automatically scaling the resources required to run the code in response to the incoming request volume.
Lambda supports several programming languages including Node.js, Python, Java, C#, Go, and Ruby.
It also provides integration with other AWS services, making it easy to build serverless applications
that take advantage of services like S3, DynamoDB, API Gateway, and others.
Lambda functions can be created, edited, and managed through the AWS Management Console, AWS CLI, or programmatically using the AWS SDKs.
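A minimal deployment of a Python function from the CLI could look like this (the function name, handler file, and execution role ARN are placeholders; the role must already exist with a Lambda trust policy):

```shell
# Package the handler and create the function
zip function.zip lambda_function.py
aws lambda create-function \
  --function-name my-function \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::111122223333:role/lambda-exec-role \
  --zip-file fileb://function.zip

# Invoke it once and write the response to out.json
aws lambda invoke --function-name my-function out.json
```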
============================================================================================================================================
46 * what is API gateway why we need it ?
AWS API Gateway is a fully managed service that makes it easy for developers to create, publish, monitor, and secure APIs at any scale.
It allows users to create RESTful APIs that can integrate with other AWS services, as well as external services.
API Gateway provides several benefits, including:
Simplified API creation: API Gateway allows users to easily create RESTful APIs by defining the endpoints, methods, request/response bodies,
and integrations with backend services using a simple web interface.
Scalability: API Gateway automatically scales to handle any amount of traffic, so developers don't have to worry about provisioning and
managing servers to handle the load.
Integration with AWS services: API Gateway can integrate with other AWS services such as Lambda,
S3, and DynamoDB, allowing developers to build powerful APIs that can take advantage of these services.
Security: API Gateway provides several mechanisms to secure APIs, including authentication, authorization, and encryption.
Monitoring and logging: API Gateway provides detailed monitoring and logging of API usage,
allowing developers to track and analyze traffic patterns, troubleshoot issues, and optimize API performance.
Overall, API Gateway is a powerful tool for building and managing APIs that can integrate with other AWS services and external systems,
providing a scalable and secure way to expose backend services to external clients.
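As a sketch, an HTTP API fronting a Lambda function can be created in a single call (the API name and function ARN are placeholders):

```shell
# Create an HTTP API with a Lambda proxy integration as its default target
aws apigatewayv2 create-api \
  --name my-api \
  --protocol-type HTTP \
  --target arn:aws:lambda:us-east-1:111122223333:function:my-function
```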
============================================================================================================================================
47 * How do you bind a role to an account? (explain with a script)
Open the AWS Management Console and navigate to the IAM dashboard.
Click on "Roles" in the left-hand navigation menu.
Click the "Create role" button.
Select the type of trusted entity for the role. This can be an AWS service or a third-party identity provider.
Choose the permissions to grant to the role. You can either select an existing policy or create a custom policy.
Give the role a name and optionally a description.
Review your settings and click "Create role".
Once the role is created, you can assign it to an AWS resource by specifying the role ARN (Amazon Resource Name) when creating or updating the resource.
Keep in mind that the specific steps for assigning a role to an AWS resource will vary depending on the type of resource you are working with.
-----also we can able to create that with script--------
aws iam create-role --role-name MyEC2Role --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name MyEC2Role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# An EC2 instance receives a role through an instance profile, so create one and add the role to it
aws iam create-instance-profile --instance-profile-name MyEC2Role
aws iam add-role-to-instance-profile --instance-profile-name MyEC2Role --role-name MyEC2Role
aws ec2 associate-iam-instance-profile --instance-id i-1234567890abcdef --iam-instance-profile Name=MyEC2Role
============================================================================================================================================
48 * how you configure autoscaling and cloud load balancer
To configure autoscaling and a cloud load balancer in AWS, follow these steps:
Create a launch configuration: A launch configuration is a template that describes the EC2 instance type, AMI, security group, and other details of your instances (launch configurations are deprecated in favor of launch templates, which serve the same purpose here).
To create a launch configuration, go to the EC2 dashboard, select "Launch Configurations," and follow the prompts.
Create an auto scaling group: An auto scaling group allows you to automatically add or remove instances based on traffic demand.
To create an auto scaling group, go to the EC2 dashboard, select "Auto Scaling Groups," and follow the prompts.
Specify the launch configuration you created in step 1.
Create a load balancer: A load balancer distributes incoming traffic across multiple instances.
To create a load balancer, go to the EC2 dashboard, select "Load Balancers," and follow the prompts.
Choose the appropriate protocol, port, and security settings for your application.
Add instances to the load balancer: Once your instances are up and running in the auto scaling group,
you need to add them to the load balancer. To do this, go to the load balancer dashboard, select your load balancer,
and choose "Add Instance." Select the instances you want to add and click "Register Instances."
Test your setup: To make sure your autoscaling and load balancing configuration is working correctly, test your application by
accessing it through the load balancer's DNS name. You should see traffic being distributed across the instances in the auto scaling group.
By following these steps, you can create a scalable and highly available application environment
in AWS using autoscaling and a cloud load balancer.
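The same setup can be sketched with the CLI using launch templates, the modern replacement for launch configurations (all names, IDs, and the target group ARN are placeholders):

```shell
# Define what each instance looks like
aws ec2 create-launch-template --launch-template-name web-lt \
  --launch-template-data '{"ImageId":"ami-0123456789","InstanceType":"t3.micro"}'

# Create the group across two subnets and register instances with the
# load balancer's target group automatically
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-lt \
  --min-size 2 --max-size 5 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/abc123
```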
============================================================================================================================================
49 * Where your application is running EC2 or EKS
============================================================================================================================================
50 * Why your team went manual instead of using EKS
============================================================================================================================================
51 * Security group, NACL
Security groups and network access control lists (NACLs) are two types of network security mechanisms in AWS
that control inbound and outbound traffic to resources.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
It is associated with an instance or a network interface and can be customized to control access to specific ports and protocols.
On the other hand, a network access control list (NACL) is an additional layer of security that acts as a firewall for subnets in a VPC.
It controls inbound and outbound traffic at the subnet level and provides granular control over IP traffic.
The main difference between security groups and NACLs is their scope and the level of control they provide.
Security groups operate at the instance level, while NACLs operate at the subnet level.
The other key difference is statefulness: security groups are stateful, so return traffic for an allowed connection is permitted automatically, and their rules can only allow traffic.
NACLs are stateless, so inbound and outbound traffic must each be allowed explicitly; their rules are evaluated in numbered order and can both allow and deny traffic, which makes NACLs the right tool for blocking specific IP addresses outright.
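The distinction shows up directly in the API. Below is a boto3 sketch (all IDs are placeholders and the calls assume configured AWS credentials): the security group needs one allow rule because it is stateful, while the NACL needs explicit inbound and outbound entries and can also carry an explicit deny.

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

ec2 = boto3.client("ec2")

# Security group: allow rules only; reply traffic is permitted
# automatically because the rule set is stateful
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# NACL: stateless and numbered -- the inbound rule alone is not enough,
# the ephemeral return ports must be opened outbound as well
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0", RuleNumber=100,
    Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0", RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)

# An explicit deny for one address, evaluated before rule 100 --
# something a security group cannot express
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0", RuleNumber=90, Protocol="6",
    RuleAction="deny", Egress=False, CidrBlock="198.51.100.7/32",
    PortRange={"From": 443, "To": 443},
)
```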
============================================================================================================================================
52 * ECS
============================================================================================================================================
53 * EC2, S3, IAM roles
Creating an EC2 Instance:
Go to the EC2 console.
Click on the "Launch Instance" button.
Select the AMI and Instance type as per your requirement.
Configure the instance details and storage.
Configure the security group and add rules as required.
Review and Launch the instance.
Creating an S3 Bucket:
Go to the S3 console.
Click on the "Create Bucket" button.
Enter a unique name for the bucket.
Select the Region for the bucket.
Configure the settings as per your requirement.
Review and Create the bucket.
Creating an IAM Role:
Go to the IAM console.
Click on the "Roles" option.
Click on the "Create Role" button.
Select the service that will use the role, in this case, select EC2.
Attach the required policies to the role.
Review and Create the role.
After creating the IAM role, you can attach it to the EC2 instance to allow the instance to access the S3 bucket.
You can do this by following these steps:
Go to the EC2 console.
Select the instance that you want to attach the IAM role to.
Click on the "Actions" button.
Select "Instance Settings" and then click on "Attach/Replace IAM Role".
Select the IAM role that you want to attach to the instance.
Click on "Apply" and the IAM role will be attached to the instance.
Once the IAM role is attached to the instance, you can use AWS CLI or SDK to access the S3 bucket from the
EC2 instance without explicitly providing the access key and secret key.
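The same walkthrough can be sketched with boto3. Role, profile, and bucket names plus the AMI ID are placeholders, and one detail the console hides is made explicit here: EC2 consumes a role through an instance profile.

```python
import json
import boto3  # AWS SDK for Python; assumes credentials are already configured

iam = boto3.client("iam")
ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Trust policy letting the EC2 service assume the role
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="s3-reader",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName="s3-reader",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# EC2 attaches roles via an instance profile (the console creates one
# implicitly; the API requires it explicitly)
iam.create_instance_profile(InstanceProfileName="s3-reader-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="s3-reader-profile", RoleName="s3-reader"
)

# Bucket names must be globally unique; outside us-east-1 a
# CreateBucketConfiguration with the region is also required
s3.create_bucket(Bucket="my-unique-bucket-name")

# Launch the instance with the profile attached; code on the instance
# can then read the bucket with no stored access keys
ec2.run_instances(
    ImageId="ami-0123456789abcdef0", InstanceType="t3.micro",
    MinCount=1, MaxCount=1,
    IamInstanceProfile={"Name": "s3-reader-profile"},
)
```

On the instance itself, the CLI and SDKs pick up temporary credentials from the role automatically, which is why no access key or secret key ever needs to be configured there.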
============================================================================================================================================
54 * Lambda
AWS Lambda is a serverless compute service provided by Amazon Web Services (AWS).
It enables users to run code without provisioning or managing servers, and pay only for the compute time consumed by the code.
With AWS Lambda, users can upload their code as a function and configure it to trigger automatically in response to specific events,
such as changes to data in Amazon S3, updates to a DynamoDB table, or API Gateway requests.
The Lambda service takes care of automatically scaling the resources required to run the code in response to the incoming request volume.
Lambda supports several programming languages including Node.js, Python, Java, C#, Go, and Ruby.
It also provides integration with other AWS services, making it easy to build serverless applications
that take advantage of services like S3, DynamoDB, API Gateway, and others.
Lambda functions can be created, edited, and managed through the AWS Management Console, AWS CLI, or programmatically using the AWS SDKs.
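A minimal handler makes the model concrete. The sketch below assumes the Python runtime and a made-up S3 "ObjectCreated" event (bucket and key names are invented); since a handler is just a function, it can be invoked locally for testing.

```python
# A minimal Lambda handler for the Python runtime. Lambda calls it with
# the triggering event (a dict) and a runtime context object.
def handler(event, context):
    # Pull the bucket name and object key out of each S3 record
    keys = [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]
    return {"statusCode": 200, "processed": keys}

# Local invocation with a fake S3 event (context is unused here)
event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                             "object": {"key": "data/file.csv"}}}]}
print(handler(event, None))
# → {'statusCode': 200, 'processed': [('my-bucket', 'data/file.csv')]}
```

Deployed to Lambda with this function configured as the handler and an S3 trigger attached, the same code would run automatically on every object upload, scaled by the service.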
============================================================================================================================================
55 * AWS instances