eks-lambda-drainer is an Amazon EKS node drainer implemented with AWS Lambda. If you provision spot instances or a Spot Fleet in your Amazon EKS nodegroup, you can listen for the spot interruption notice from CloudWatch Events, which fires 120 seconds before the final termination. By configuring this Lambda function as the CloudWatch Events target, eks-lambda-drainer performs taint-based eviction on the terminating node: all pods without a matching toleration are evicted and rescheduled onto other nodes, so your workload sees minimal impact from the spot instance termination.
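For reference, the interruption notice arrives as an EC2 event, so a CloudWatch Events rule with a pattern like the following can route it to the Lambda function (a sketch for illustration; the SAM template in this repo is expected to wire this up for you):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Spot Instance Interruption Warning"]
}
```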
- execute `dep ensure -v` to make sure all required packages can be downloaded locally
- type `make` to build the `main.zip` for Lambda
- run `sam package` to package the Lambda bundle:

```sh
sam package \
--template-file sam.yaml \
--output-template-file sam-packaged.yaml \
--s3-bucket pahud-tmp
```

  (change `pahud-tmp` to your own temporary S3 bucket name)

- run `sam deploy` to deploy to AWS Lambda:

```sh
sam deploy \
--template-file sam-packaged.yaml \
--stack-name eks-lambda-drainer \
--capabilities CAPABILITY_IAM
```
Read the Amazon EKS documentation about how to add an IAM role to the aws-auth ConfigMap.
Edit the `aws-auth` ConfigMap:

```sh
kubectl edit -n kube-system configmap/aws-auth
```

Insert `rolearn`, `groups` and `username` into `mapRoles`, making sure `groups` contains `system:masters`.
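A `mapRoles` entry might look like the following sketch (the account ID and role name are placeholders; use the role ARN from your deployed stack, and any `username` you like):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-lambda-drainer-role
      username: eks-lambda-drainer
      groups:
        - system:masters
```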
You can get the `rolearn` from the Outputs tab of the CloudFormation console.
Try `kubectl describe node` on the terminating node and you will see the Taints on it.
- package the Lambda function in AWS SAM format
- publish to AWS Serverless Application Repository
- ASG/LifeCycle integration #2
- add more samples
Q: Do I have to create one Lambda function per Amazon EKS cluster?

ANS: No. eks-lambda-drainer determines the Amazon EKS cluster name from the EC2 tags (key=`kubernetes.io/cluster/{CLUSTER_NAME}` with value=`owned`). A single Lambda function can handle all spot instances across different nodegroups and different Amazon EKS clusters.
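The cluster-name lookup described above can be sketched as follows (an illustrative Go snippet, not the repo's actual code; `clusterNameFromTags` is a hypothetical helper over tags as a key/value map):

```go
package main

import (
	"fmt"
	"strings"
)

// clusterNameFromTags returns the EKS cluster name encoded in an EC2
// instance's tags. EKS worker nodes carry a tag whose key is
// "kubernetes.io/cluster/<name>" with the value "owned"; the function
// returns the <name> part, or "" when no such tag exists.
func clusterNameFromTags(tags map[string]string) string {
	const prefix = "kubernetes.io/cluster/"
	for key, value := range tags {
		if strings.HasPrefix(key, prefix) && value == "owned" {
			return strings.TrimPrefix(key, prefix)
		}
	}
	return ""
}

func main() {
	tags := map[string]string{
		"Name":                         "eks-node",
		"kubernetes.io/cluster/my-eks": "owned",
	}
	fmt.Println(clusterNameFromTags(tags)) // prints "my-eks"
}
```

Because the cluster name travels with the instance itself, the function needs no per-cluster configuration.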



