ERROR 1728 (HY000): Cannot load from mysql.db. The table is probably corrupted
This happens due to schema changes required by different MySQL server versions. The simple fix recommended by MySQL is to run the mysql_upgrade command from the command line.
mysql_upgrade checks all tables across all databases for incompatibilities with the current version of MySQL. mysql_upgrade also upgrades the system tables so that we can take advantage of new privileges or capabilities that might have been added. It supersedes the older mysql_fix_privilege_tables script, which should no longer be used.
[IMPORTANT NOTE] Before running the mysql_upgrade command on a production server, it is always good practice to take a full backup of all databases first, in case something goes wrong.
I tried to run the mysql_upgrade command from the terminal:
mysql_upgrade -uroot -p
After entering the command, I got this error message:

Note: I am not specifying the MySQL username and password in the command because I am using the .my.cnf configuration file.
mysql_upgrade performs this verification query; if the result is not equal to 1, mysql_upgrade cannot proceed:
SELECT SUM(count)=3 FROM (
  SELECT COUNT(*) AS count FROM mysql.tables_priv
    WHERE Table_priv='Select' AND User='mysql.session' AND Db='mysql' AND Table_name='user'
  UNION ALL
  SELECT COUNT(*) AS count FROM mysql.db
    WHERE Select_priv='Y' AND User='mysql.session' AND Db='performance_schema'
  UNION ALL
  SELECT COUNT(*) AS count FROM mysql.user
    WHERE Super_priv='Y' AND User='mysql.session'
) AS user_priv;
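To make the intent of the query clearer, here is a Python sketch of the same three-part check; each sub-check should contribute exactly one row, so a total of 3 means the check passes. The sample rows below are illustrative, not data from a real server.

```python
# Sketch of the three privilege checks the query performs for the
# mysql.session user; each sub-check should contribute exactly 1.
tables_priv = [
    {"Db": "mysql", "Table_name": "user", "User": "mysql.session", "Table_priv": "Select"},
]
db = [
    {"Db": "performance_schema", "User": "mysql.session", "Select_priv": "Y"},
]
user = [
    {"User": "mysql.session", "Super_priv": "Y"},
]

def session_user_priv_count(tables_priv, db, user):
    """Mirror the SUM(count) of the three UNION ALL sub-queries."""
    c1 = sum(1 for r in tables_priv
             if r["Db"] == "mysql" and r["Table_name"] == "user"
             and r["User"] == "mysql.session" and r["Table_priv"] == "Select")
    c2 = sum(1 for r in db
             if r["Db"] == "performance_schema"
             and r["User"] == "mysql.session" and r["Select_priv"] == "Y")
    c3 = sum(1 for r in user
             if r["User"] == "mysql.session" and r["Super_priv"] == "Y")
    return c1 + c2 + c3

print(session_user_priv_count(tables_priv, db, user))  # 3 means the check passes
```

If, for example, the mysql.db row is missing, the total drops to 2 and the verification fails, which is exactly the case the INSERT statement later in this post repairs.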
Query to check whether there are multiple mysql.session users in the mysql.user table:
select user ,host from mysql.user where user='mysql.session';
When multiple users are found, keep the user whose Host is localhost, delete the rest, and then execute mysql_upgrade again, this time with the --force parameter:
mysql_upgrade -uroot -p --force
It didn't work even with the --force parameter.
If the above situation does not apply, check whether the mysql.session user in the mysql.db and mysql.user tables has the Select_priv privilege:
SELECT * FROM `mysql`.`user` WHERE User='mysql.session';
If there is no such row in the db table, you can directly execute this insert statement:
INSERT INTO `mysql`.`db`(`Host`, `Db`, `User`, `Select_priv`, `Insert_priv`, `Update_priv`, `Delete_priv`, `Create_priv`, `Drop_priv`, `Grant_priv`, `References_priv`, `Index_priv`, `Alter_priv`, `Create_tmp_table_priv`, `Lock_tables_priv`, `Create_view_priv`, `Show_view_priv`, `Create_routine_priv`, `Alter_routine_priv`, `Execute_priv`, `Event_priv`, `Trigger_priv`) VALUES ('localhost', 'performance_schema', 'mysql.session', 'Y', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N');
flush privileges;
After fixing the privileges, execute mysql_upgrade again; this time it should succeed:
/usr/bin/mysql_upgrade --force --upgrade-system-tables
After mysql_upgrade completes successfully, restart the MySQL service to make sure any changes to the system tables take effect:
systemctl restart mysql
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
In this setup, we have created three EC2 servers (Ubuntu 14.04 LTS) and one Classic ELB on AWS. Clients (both producers and consumers) connect to the RabbitMQ servers through the ELB, so the system stays functional even if one or more servers go down; in other words, the system remains operational as long as at least one server is running.
RabbitMQ requirement (copied from the official RabbitMQ site):
RabbitMQ nodes address each other using domain names, either short or fully-qualified (FQDNs). Therefore hostnames of all cluster members must be resolvable from all cluster nodes, as well as machines on which command line tools such as rabbitmqctl might be used.
Hostname resolution can use any of the standard OS-provided methods:
DNS records
Local host files (e.g. /etc/hosts)
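Since we will rely on /etc/hosts entries below, a short Python sketch can sanity-check that every cluster hostname appears in an /etc/hosts-style file. The IP addresses here are made up for illustration; the hostnames are the ones used later in this post.

```python
# Minimal check that all RabbitMQ cluster hostnames are resolvable
# via a hosts-file. The IPs below are illustrative placeholders.
HOSTS_FILE = """\
127.0.0.1 localhost
10.0.1.10 rabbitmq-server-01
10.0.1.11 rabbitmq-server-02
10.0.1.12 rabbitmq-server-03
"""

def resolvable(hostname, hosts_text):
    """Return True if hostname appears as a name in the hosts-file text."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0]   # strip comments
        fields = line.split()
        if len(fields) >= 2 and hostname in fields[1:]:
            return True
    return False

cluster = ["rabbitmq-server-01", "rabbitmq-server-02", "rabbitmq-server-03"]
print(all(resolvable(h, HOSTS_FILE) for h in cluster))  # True
```

If any node prints False here, rabbitmqctl and cluster joins will fail with unresolvable-hostname errors, so it is worth checking before installing RabbitMQ.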
Settings on AWS:


Connect to each EC2 server and configure the hostname:
sudo vi /etc/hostname
Set the hostname:
After that, modify the /etc/hosts file on each host (it will be the same on all of them):
sudo vi /etc/hosts
In my case I have given them the hostnames rabbitmq-server-01, rabbitmq-server-02 and rabbitmq-server-03. So it will look like this:
After performing all the steps, download this repository from GitHub:
git clone https://github.com/arbabnazar/rabbitmq-cluster.git
cd rabbitmq-cluster
Open rabbitmq-cluster/defaults/main.yml and set the values for these variables:
RABBITMQ_ERLANG_COOKIE: "VeryLongRandomString"
RABBITMQ_USERS:
  - name: "admin"
    password: "VeryStrongPassword"
  - name: "dev"
    password: "AgainStrongPassword"
RABBITMQ_VHOSTS:
  - '/'
RABBITMQ_CLUSTERED_HOSTS:
  - "rabbit@rabbitmq-server-01"
  - "rabbit@rabbitmq-server-02"
  - "rabbit@rabbitmq-server-03"
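The entries in RABBITMQ_CLUSTERED_HOSTS follow the Erlang node-name convention `name@shorthostname`, and the host part must match each machine's configured hostname. A tiny sketch of that split:

```python
def split_node_name(node):
    """Split an Erlang node name like 'rabbit@rabbitmq-server-01'
    into its (name, host) parts."""
    name, _, host = node.partition("@")
    return name, host

nodes = [
    "rabbit@rabbitmq-server-01",
    "rabbit@rabbitmq-server-02",
    "rabbit@rabbitmq-server-03",
]
# The host parts must match the hostnames we set in /etc/hostname above.
print([split_node_name(n)[1] for n in nodes])
```

If a host part here does not match the machine's hostname, the node will start as a different Erlang node and refuse to join the cluster.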
Once you are all set with the variables, then run this command:
ansible-playbook -i rabbitmq-server-01.rbgeek.com,rabbitmq-server-02.rbgeek.com,rabbitmq-server-03.rbgeek.com, rabbitmq.yml -u arbabnazar
Note: Please don't forget to replace arbabnazar with your username and the rabbitmq-server-0X.rbgeek.com hostnames with your own.
After these tasks complete successfully, open the management interface on any of the servers:
http://rabbitmq-server-01.rbgeek.com:15672
Enter the Username and Password:
It will show all the details about the RabbitMQ Cluster:
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
In short, in this tutorial we’ll do the following tasks using Ansible:
Without going into further details, let's start. First we'll create the GitHub OAuth credentials for the integration, using the link below (just replace your GitHub organization name):
https://github.com/organizations/YOUR-ORG-NAME/settings/applications
In my case the organization name is "tendo-org", so the URL will be:
https://github.com/organizations/tendo-org/settings/applications
At the top of the page, under Organization applications, click Register new application.
Note the "Client ID" and "Client Secret" values that appear (we will need them later). Save them in a secure place and never share them with anyone.
After creating the "Client ID" and "Client Secret" values, download this repository from GitHub:
git clone https://github.com/arbabnazar/ansible-jenkins.git
cd ansible-jenkins
Open jenkins/defaults/main.yml and set the values for these variables:
GITHUB_ORG: "tendo-org"
GITHUB_CLINT_ID: "7e449bb096825c6b6c19"
GITHUB_SECRET_ID: "e1e1d4b217a2d39f5bf4c73bec4c0e5b7fa37f01"
GITHUB_OAUTH_SCOPES: "read:org,user:email"
JENKINS_ADMIN_GROUP: "{{ GITHUB_ORG }}*admins"
JENKINS_DEVELOPER_GROUP: "{{ GITHUB_ORG }}*members"
GITHUB_ORG: name of your GitHub organization
GITHUB_CLINT_ID: OAuth Client ID that we created above
GITHUB_SECRET_ID: OAuth Client Secret that we created above
GITHUB_OAUTH_SCOPES: scopes of the OAuth application
JENKINS_ADMIN_GROUP: GitHub group that can administer Jenkins
JENKINS_DEVELOPER_GROUP: GitHub group that is allowed to use this Jenkins (all jobs, or one, based on permission)
Then open apache/defaults/main.yml and set the values of these variables:
ssl_cert_path: "/etc/ssl/cert.pem"
ssl_key_path: "/etc/ssl/privkey.pem"
ssl_chain_path: "/etc/ssl/fullchain.pem"
redirect_port: 8080
APACHE_SITES:
  - sitename: "jenkins.rbgeek.com"
    servername: "jenkins.rbgeek.com"
    serveradmin: "[email protected]"
    listen: "80"
    rewrite: True
    state: link
  - sitename: "jenkins.rbgeek.com-ssl"
    servername: "jenkins.rbgeek.com"
    serveradmin: "[email protected]"
    listen: "443"
    ssl: 'ssl'
    state: link
Once you are all set with the variables, then run this command:
ansible-playbook -i "jenkins.rbgeek.com," jenkins.yml -u arbabnazar
Note: Please don't forget to replace arbabnazar with your username and jenkins.rbgeek.com with your hostname.
After successful completion of these tasks:
Open your Jenkins URL in a browser (in my case, jenkins.rbgeek.com) and click Log in; it will redirect you to GitHub:
Enter your GitHub username and password:
It will ask for Authorization:
Once you are done, it will take you back to Jenkins:
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
First, download this repository from GitHub:
git clone https://github.com/arbabnazar/ansible-roles.git
Note: If git is not installed then you can simply download the zip file.
Move inside the cloned directory:
cd ansible-roles
To use this role, edit the site.yml file:
vi site.yml
---
- hosts: server
  become: yes
  gather_facts: yes
  roles:
    - users-with-github-key
After that, edit the hosts file and enter the IP of the remote server on which you want to perform these tasks:
vi hosts
In my case, it is 192.168.33.100:
[server]
192.168.33.100
After that, edit the users-with-github-key/defaults/main.yml file:
Change the username(s), the type (admin/sudo user or not) and the state of each user (whether to create or remove it on the target system). These are self-explanatory.
There are generally two classes of users: (1) admin users with full sudo permissions (2) normal login users without any special permissions
The parameter "type" sets the user in one of these two categories: (1) type: admin (2) type: normal
---
users_list:
  - username: arbabnazar
    type: admin
    state: present
  - username: arbab786
    type: admin
    state: present
  - username: tendo
    type: normal
    state: absent
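This is not the role's actual code, but the branching the variables imply can be sketched in Python: admin users get the sudo group, and users with `state: absent` are removed. The `plan_user_actions` helper and the "sudo" group name are assumptions for illustration.

```python
def plan_user_actions(users_list, admin_group="sudo"):
    """Turn the users_list variable into (username, action, groups) tuples.
    Illustrative sketch only, not the role's real implementation."""
    actions = []
    for u in users_list:
        if u["state"] == "absent":
            actions.append((u["username"], "remove", []))
        else:
            groups = [admin_group] if u["type"] == "admin" else []
            actions.append((u["username"], "create", groups))
    return actions

users_list = [
    {"username": "arbabnazar", "type": "admin",  "state": "present"},
    {"username": "tendo",      "type": "normal", "state": "absent"},
]
print(plan_user_actions(users_list))
# [('arbabnazar', 'create', ['sudo']), ('tendo', 'remove', [])]
```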
Then run this command:
ansible-playbook -i hosts -u arbabnazar site.yml
Note: Please don't forget to replace arbabnazar with your username. Ansible tries the login user by default, but I have specified it explicitly to show the complete example.
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
The steps for ansible-pull are:
1. Pull the git repo containing your playbooks.
2. The repo is cloned to the specified directory.
3. ansible-pull starts executing the local.yml found in your cloned repo directory.
Let's assume that you want to pull code from a private git repo, for which you need the ssh private key, but you have taken a vanilla Ubuntu AMI from the Marketplace. How will you clone the private repo? For this we'll use the bootstrap pattern:
– Put the private part of the ssh key for the git repository on S3.
– Fetch the ssh key from the S3 bucket using IAM role credentials.
For this, create an S3 bucket (in my case it is named "tendo-github-key-s3"):
Upload the desired ssh key to this bucket:
bitbucket is a custom ssh client config file that tells the OS to use a custom identity file when connecting to Bitbucket to fetch the private repo. Its content is:
Host bitbucket.org
  IdentityFile /root/.ssh/bitbucket_secret_key
  TCPKeepAlive yes
  IdentitiesOnly yes
Next, we need to create the IAM policy and role that give the EC2 instance access to those files in the S3 bucket:
Create policy to access S3 bucket. Select “Create Your Own Policy“:
Enter Policy Name and the Policy Document as given below(adjust as per your requirement):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::tendo-github-key-s3"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::tendo-github-key-s3/*"
      ]
    }
  ]
}
Create Role by giving the name:
Select Role Type as “Amazon EC2“:
Then attach a policy – “tendo-github-key-s3“:
Now the IAM role and policy are ready. Let's launch the instance with this IAM role.
Here I am creating a single instance for demonstration, because the purpose of this tutorial is to show the ansible-pull feature, not AWS autoscaling; you can use the exact same procedure when launching instances inside an Auto Scaling group (manually or automated):
Launch the Ubuntu instance here, select the IAM role which was created above and enter the User data:
Use the following user data, which installs the required packages, fetches the files from the S3 bucket and runs Ansible in pull mode:
#!/bin/bash
apt-get update
apt-get install -y libffi-dev g++ libssl-dev python-pip python-dev git
pip install -U awscli ansible setuptools
aws s3 cp s3://tendo-github-key-s3/git-private-key /root/.ssh/bitbucket_secret_key
chmod 400 /root/.ssh/bitbucket_secret_key
aws s3 cp s3://tendo-github-key-s3/bitbucket /root/.ssh/config
chmod 600 /root/.ssh/config
ansible-pull -d /root/playbooks -i 'localhost,' -U git@bitbucket.org:arbabnazar/pull-test.git --accept-host-key
We invoke ansible-pull with the following options (there are many more):
1. --accept-host-key: adds the host key for the repo URL if it is not already known
2. -U: URL of the playbook repository
3. -d: directory to check the repository out to
4. -i 'localhost,': the inventory to use; since we only care about one host, a single-entry inventory (note the trailing comma) is enough
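Putting those options together, the invocation from the user data can be assembled programmatically. This is just a sketch; the repo URL and checkout directory are the ones used in this post, and `build_ansible_pull_cmd` is a hypothetical helper, not part of Ansible.

```python
import shlex

def build_ansible_pull_cmd(repo_url, checkout_dir, inventory="localhost,"):
    """Assemble an ansible-pull command line from its main options."""
    args = [
        "ansible-pull",
        "-d", checkout_dir,        # where to clone the repo
        "-i", inventory,           # single-host inventory needs a trailing comma
        "-U", repo_url,            # playbook repository URL
        "--accept-host-key",       # trust the repo host key on first contact
    ]
    return " ".join(shlex.quote(a) for a in args)

cmd = build_ansible_pull_cmd("git@bitbucket.org:arbabnazar/pull-test.git",
                             "/root/playbooks")
print(cmd)
```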
Once the server is up and running, you can log in and review /var/log/cloud-init-output.log for more information:
sudo vi /var/log/cloud-init-output.log
There are tons of log lines, but these are the ones of interest:
download: s3://tendo-github-key-s3/git-private-key to root/.ssh/bitbucket_secret_key
download: s3://tendo-github-key-s3/bitbucket to root/.ssh/config
Starting Ansible Pull at 2016-05-15 13:41:28
/usr/local/bin/ansible-pull -d /root/playbooks -i localhost, -U git@bitbucket.org:arbabnazar/pull-test.git --accept-host-key
localhost | SUCCESS => {
    "after": "ce0a3743f7de573cb3cbd219e39e026d665aa62b",
    "before": null,
    "changed": true
}

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [localhost]
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
Terraform v0.6.15
Before using Terraform, we need to export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyyyyyyyyyyyyyyyyy"
The first step is to create an S3 bucket that will act as the 'origin' in the CloudFront distribution; this is where all of your static files and assets will live. Here's the code to do this:
// Setup your S3 Bucket
resource "aws_s3_bucket" "cdn_bucket" {
  bucket = "${var.bucket_name}"
  acl    = "public-read"
  policy = <<POLICY
{
  "Version":"2012-10-17",
  "Statement":[{
    "Sid":"PublicReadForGetBucketObjects",
    "Effect":"Allow",
    "Principal": "*",
    "Action":"s3:GetObject",
    "Resource":["arn:aws:s3:::${var.bucket_name}/*"]
  }]
}
POLICY
}
// Setup the CloudFront Distribution
resource "aws_cloudfront_distribution" "cloudfront_distribution" {
  origin {
    domain_name = "${var.bucket_name}.s3.amazonaws.com"
    origin_id   = "S3-${var.bucket_name}"
    s3_origin_config {}
  }
  enabled     = true
  price_class = "${var.price_class}"
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${var.bucket_name}"
    forwarded_values {
      query_string = true
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  retain_on_delete = "${var.retain_on_delete}"
  viewer_certificate {
    cloudfront_default_certificate = true
  }
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
}
Let’s examine some of the important parameters:
domain_name: points to the origin which is S3 bucket endpoint.
origin_id: A unique identifier for this origin configuration, which is the name of the S3 bucket plus “S3” keyword
s3_origin_config: Extra S3 origin options, we leave this blank
enabled: Enable our CloudFront distribution
price_class: Price varies depending on the edge location from which CloudFront serves the requests.
retain_on_delete: Decide whether to delete or merely disable the CloudFront distribution on the terraform destroy command
allowed_methods: Which HTTP requests we permit the distribution to serve
cached_methods: Which HTTP requests we let this behaviour apply to
target_origin_id: we use the same as “origin_id”
forwarded_values: Entities that will be passed from the edge to our origin.
viewer_protocol_policy: Which HTTP protocol policy to enforce. For example: allow-all, https-only, or redirect-to-https.
min_ttl: Minimum time (seconds) for objects to live in the distribution cache
max_ttl: Maximum time (seconds) that objects can live in the distribution cache
default_ttl: The default time (seconds) for the objects to live in the distribution cache
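To see how the three TTLs interact, here is a sketch of the selection rule: CloudFront broadly clamps the origin's Cache-Control max-age between min_ttl and max_ttl, and falls back to default_ttl when the origin sends no caching header. This is a simplified model of CloudFront's behavior, not its exact implementation.

```python
def effective_ttl(origin_max_age, min_ttl=0, default_ttl=3600, max_ttl=86400):
    """Simplified sketch of how CloudFront picks a cache lifetime.

    origin_max_age is the max-age from the origin's Cache-Control header,
    or None when the origin sends no caching header.
    """
    if origin_max_age is None:
        return default_ttl  # no header from origin: use the default
    # clamp the origin value into the [min_ttl, max_ttl] window
    return min(max(origin_max_age, min_ttl), max_ttl)

print(effective_ttl(None))     # 3600: no Cache-Control, default applies
print(effective_ttl(600))      # 600: within the window, origin value wins
print(effective_ttl(604800))   # 86400: capped by max_ttl
```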
The last step is to add the Route53 record that references the CloudFront distribution we created above.
// Add Root Route53 Records
resource "aws_route53_record" "main_record" {
  zone_id = "${var.hosted_zone_id}"
  name    = "${var.route53_record_name}.${var.domain_name}"
  type    = "A"
  alias {
    name                   = "${aws_cloudfront_distribution.cloudfront_distribution.domain_name}"
    zone_id                = "${var.alias_zone_id}"
    evaluate_target_health = false
  }
}
A few critical pieces you should know about:
zone_id: ID for the domain hosted zone
domain_name: Name of the domain where the record(s) need to be created
route53_record_name: Name of the record that you want to create for CDN
alias_zone_id: Fixed hardcoded constant zone_id that is used for all CloudFront distributions
Required variables:
Modify the variables as per your requirements.
variable aws_region {
  default = "us-east-1"
}
variable bucket_name {
  description = "name of the bucket that will use as origin for CDN"
  default     = "tendo-cdn-bucket"
}
variable retain_on_delete {
  description = "Instruct CloudFront to simply disable the distribution instead of delete"
  default     = false
}
variable price_class {
  description = "Price classes provide you an option to lower the prices you pay to deliver content out of Amazon CloudFront"
  default     = "PriceClass_All"
}
variable hosted_zone_id {
  description = "ID for the domain hosted zone"
  default     = "XXXXXXXXXXXXXX"
}
variable domain_name {
  description = "Name of the domain where record(s) need to create"
  default     = "tendo.com"
}
variable route53_record_name {
  description = "Name of the record that you want to create for CDN"
  default     = "tend-cdn"
}
variable alias_zone_id {
  description = "Fixed hardcoded constant zone_id that is used for all CloudFront distributions"
  default     = "Z2FDTNDATAQYW2"
}
To generate and show an execution plan (dry run):
terraform plan
To build or apply the actual changes to the infrastructure:
terraform apply
After terraform apply completes successfully (allow 15 to 20 minutes for the CloudFront distribution), log in to the AWS Web Console and verify the resources:
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
First, we need to clone Terraform's official GitHub repo:
git clone https://github.com/hashicorp/terraform.git
Move inside the cloned repo:
cd terraform
A Vagrantfile is provided with the official repo that uses the Vagrant virtual machine to provide a consistent environment with the pre-requisite tools in place.
Start a fresh VM (this will download a Vagrant box of around 350 MB):
vagrant up
Login to the machine:
vagrant ssh
The Vagrant box comes with Go pre-installed and configures $GOPATH at /opt/gopath; the current "terraform" directory is then synced into the gopath:
cd /opt/gopath/src/github.com/hashicorp/terraform/
Run the unit tests:
make test
Build the development version of Terraform from master or specific PR:
make bin
This starts a process (which can take around 4 to 5 hours) that generates binaries for each supported platform and places them in the pkg directory.
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
We'll use Terraform to create a fully operational AWS VPC infrastructure (subnets, routing tables, IGW, etc.), along with everything needed to create EC2 and RDS instances (security key, security group, subnet group). It will also create an Elastic Load Balancer, automatically add the EC2 instance(s) to it, and create a Route53 entry for the WordPress site with an alias to the ELB.
Ansible will be used to deploy WordPress on the EC2 instances created via Terraform, making the site fault tolerant and highly available.
Requirements:
Terraform
Ansible
AWS admin access
Tools Used:
#ansible --version
ansible 2.0.0.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
#terraform version
Terraform v0.6.11
Before using Terraform, we need to export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyyyyyyyyyyyyyyyyy"
After doing/verifying all the above, download this repository from GitHub:
git clone https://github.com/arbabnazar/terraform-ansible-aws-vpc-ha-wordpress.git
cd terraform-aws
Terraform AWS Modules:
The purpose of the Terraform AWS modules is to create a fully operational AWS VPC infrastructure (subnets, routing tables, IGW, etc.), plus everything needed for creating EC2 and RDS instances (security key, security group, subnet group).
They also create the Elastic Load Balancer, automatically add the EC2 instance(s) created by this configuration to it, and create the Route53 entry for the site with an alias to the ELB.
Terraform AWS Modules Tasks:
All information about the VPC, webserver, RDS, ELB and Route53 is defined in the respective modules.
Variables for your Infrastructure:
Rename the file terraform.tfvars-sample to terraform.tfvars and change the values as per your requirements:
mv terraform.tfvars-sample terraform.tfvars
rds_password = "securepassword"
aws_region = "us-east-1"
domain_name = "tendo.com"
hosted_zone_id = "Z132HODITPRQ5P"
To generate and show an execution plan (dry run):
terraform plan
To build or apply the actual changes to the infrastructure:
terraform apply
To inspect Terraform state or plan:
terraform show
To destroy Terraform-managed infrastructure:
terraform destroy
Note: Terraform stores the state of the managed infrastructure from the last time Terraform was run. Terraform uses the state to create plans and make changes to the infrastructure.
After terraform apply completes successfully, log in to the AWS Web Console and verify the resources:
VPC:
EC2:
RDS:
Route53:
Ansible Role after Terraform Provisioning:
Once Terraform has created all the resources on AWS, you can use Ansible to install WordPress on the EC2 instance(s). To use the provided role, move into the ansible directory:
cd ansible
The provided role will install WordPress on all the servers that were created via Terraform. To use it, run the following command:
ansible-playbook site.yml -e@../secret/secure.yml -e@../terraform-aws/tendo-dev.yml
and use this command if you are using encrypted file:
ansible-playbook site.yml -e@../secret/secure.yml -e@../terraform-aws/tendo-dev.yml --ask-vault-pass
The secure.yml file is used to override the variables inside the role. It should be kept secure using Ansible Vault, but I have left it unencrypted so that you can see what it contains. tendo-dev.yml contains the DNS name of the RDS instance; it is created during the Terraform run, and its name is based on the values of these variables:
Note: terraform.py is a dynamic inventory script created by CiscoCloud.
After successful completion of these tasks, it will show you the summary, something like this:
Navigate to the site in a web browser using the FQDN (in my case, http://www.rbgeek.com) and verify that WordPress is installed successfully:
Enjoy!
Hope this will help you!
Please Remember me in your prayers!
If you have completed the previous parts of this series, then you have already cloned the git repo that contains all the roles; if not, clone it now:
git clone https://github.com/arbabnazar/ansible-aws-roles.git
cd ansible-aws-roles
Modify the aws.yml playbook to add the desired roles:
---
- hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - vpc
    - ec2sg
    - ec2key
    - ec2instance
    - elb
    - rds
Note: You may have noticed that we have also added the vpc, ec2sg, ec2key, ec2instance and elb roles to the playbook. Because Ansible is idempotent, it will not re-create those resources if you created them in the previous parts, with the exception of the EC2 instance (that role is not idempotent).
Review/modify the variable file for the RDS instance, see roles/rds/defaults/main.yml:
---
# Variables that can provide as extra vars
VPC_REGION: us-east-1 # N.Virginia
RDS_SUBNET_GROUP_NAME: "my_test_subnet_group"
RDS_SG_DESCRIPTION: "My Subnet Group for wordpress rds instance"
RDS_SG_SUBNETS: []
RDS_MULTI_ZONE_OPTION: no
RDS_SG_NAME: "test_rd_sg"
RDS_INSTANCE_NAME: "my-test-rds"
RDS_DB_ENGINE: MySQL
RDS_DB_SIZE: 5
RDS_DB_NAME: "rds_test"
RDS_INSTANCE_TYPE: "db.t2.micro"
RDS_DB_USERNAME: admin
RDS_DB_PASSWORD: test
RDS_BACKUP_RETENTION_PERIOD: 0
RDS_PUBLICLY_ACCESSIBLE: yes
RDS_WAIT_TIMEOUT: 300
# Used inside the tasks
vpc_region: "{{ VPC_REGION }}"
rds_subnet_group_name: "{{ RDS_SUBNET_GROUP_NAME }}"
rds_sg_description: "{{ RDS_SG_DESCRIPTION }}"
rds_sg_subnets: "{{ RDS_SG_SUBNETS }}"
rds_multi_zone_option: "{{ RDS_MULTI_ZONE_OPTION }}"
rds_sg_name: "{{ RDS_SG_NAME }}"
rds_instance_name: "{{ RDS_INSTANCE_NAME }}"
rds_db_engine: "{{ RDS_DB_ENGINE }}"
rds_db_size: "{{ RDS_DB_SIZE }}"
rds_db_name: "{{ RDS_DB_NAME }}"
rds_instance_type: "{{ RDS_INSTANCE_TYPE }}"
rds_db_username: "{{ RDS_DB_USERNAME }}"
rds_db_password: "{{ RDS_DB_PASSWORD }}"
rds_backup_retention_period: "{{ RDS_BACKUP_RETENTION_PERIOD }}"
rds_publicly_accessible: "{{ RDS_PUBLICLY_ACCESSIBLE }}"
rds_wait_timeout: "{{ RDS_WAIT_TIMEOUT }}"
We need to override the values of all the uppercase variables. For this, we'll set them in an external file (in my case, secret.yml), which already contains our VPC, security group and EC2 key pair variables:
---
# Environment specific variables
COMPANY: rbgeek
ENVIRONMENT: dev
SERVER_ROLE: web
# VPC specific variables
VPC_NAME: "{{ COMPANY }}-{{ ENVIRONMENT }}"
VPC_REGION: eu-west-1 # Ireland
VPC_CIDR: "10.10.0.0/16"
VPC_CLASS_DEFAULT: "10.10"
# Github username for creating EC2 Key Pair
GITHUB_USERNAME: "arbabnazar"
EC2_KEY_NAME: "{{ GITHUB_USERNAME }}-github-key"
LOCAL_USER_SSH_KEY: no
# Ubuntu AMI specific variables
AMI_NAME: "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"
AMI_OWNER: "099720109477"
# EC2 instances specific variables
EC2_INSTANCE_TYPE: t2.nano
EC2_SECURITY_GROUP_NAME:
  - "{{ VPC_NAME }}-{{ SSH_SG_NAME }}"
  - "{{ VPC_NAME }}-{{ WEB_SG_NAME }}"
EC2_VOLUME_SIZE: 10
EC2_COUNT: 1
EC2_SUBNET_ID:
  - "{{ PUBLIC_SUBNET_1 }}"
  - "{{ PUBLIC_SUBNET_2 }}"
# RDS specific variables
RDS_SUBNET_GROUP_NAME: "{{ VPC_NAME }}-subnet-group"
RDS_SG_DESCRIPTION: "Subnet Group for rds instances inside {{ VPC_NAME }} VPC"
RDS_SG_SUBNETS:
  - "{{ PRIVATE_SUBNET_1 }}"
  - "{{ PRIVATE_SUBNET_2 }}"
RDS_SG_NAME: "{{ VPC_NAME }}-{{ DATABASE_SG_NAME }}"
RDS_MULTI_ZONE_OPTION: no
RDS_INSTANCE_NAME: "{{ COMPANY }}-{{ ENVIRONMENT }}-rds"
RDS_DB_ENGINE: MySQL
RDS_DB_SIZE: 10
RDS_DB_NAME: "mydatabase"
RDS_INSTANCE_TYPE: "db.t2.micro"
RDS_DB_USERNAME: root
RDS_DB_PASSWORD: "verystrongpassword"
RDS_BACKUP_RETENTION_PERIOD: 1
RDS_PUBLICLY_ACCESSIBLE: no
RDS_WAIT_TIMEOUT: 1800
# Elastic Load Balancer specific variables
ELB_NAME: "{{ COMPANY }}-{{ ENVIRONMENT }}-{{ SERVER_ROLE }}-elb"
ELB_SUBNET_ID:
  - "{{ PUBLIC_SUBNET_1 }}"
  - "{{ PUBLIC_SUBNET_2 }}"
ELB_PURGE_SUBNETS: yes
ELB_CROSS_AZ_LOAD_BALANCING: yes
ELB_PING_PROTOCOL: tcp
ELB_PING_PORT: 80
ELB_RESPONSE_TIMEOUT: 5
ELB_INTERVAL: 30
ELB_UNHEALTHY_THRESHOLD: 2
ELB_HEALTHY_THRESHOLD: 10
ELB_CONNECTION_DRAINING_TIMEOUT: 60
ELB_SECURITY_GROUP_NAME: "{{ VPC_NAME }}-{{ ELB_SG_NAME }}"
ELB_LISTENERS:
  - protocol: http
    load_balancer_port: 80
    instance_protocol: http
    instance_port: 80
  - protocol: https
    load_balancer_port: 443
    instance_protocol: http
    instance_port: 80
    ssl_certificate_id: "arn:aws:iam::189142601945:server-certificate/tendo-crt"
# EC2 Security Groups specific variables
WEB_SG_NAME: "webserver-sg"
DATABASE_SG_NAME: "rds-sg"
SSH_SG_NAME: "ssh-sg"
ELB_SG_NAME: "elb-sg"
# Security Groups
EC2_SECURITY_GROUPS: "{{ SSH_SG + WEB_SG + DATABASE_SG + ELB_SG }}"
# Security Groups info (Name, Description and Rules) for Web, RDS and ELB
SSH_SG:
  - name: "{{ VPC_NAME }}-{{ SSH_SG_NAME }}"
    description: "This sg is for remote access to instances inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0
WEB_SG:
  - name: "{{ VPC_NAME }}-{{ WEB_SG_NAME }}"
    description: "This sg is for web instances inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 0.0.0.0/0
      - proto: tcp
        from_port: 443
        to_port: 443
        cidr_ip: 0.0.0.0/0
DATABASE_SG:
  - name: "{{ VPC_NAME }}-{{ DATABASE_SG_NAME }}"
    description: "This sg is for rds instances inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 3306
        to_port: 3306
        group_name: "{{ VPC_NAME }}-{{ WEB_SG_NAME }}"
ELB_SG:
  - name: "{{ VPC_NAME }}-{{ ELB_SG_NAME }}"
    description: "This sg is for ELB inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 0.0.0.0/0
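The UPPERCASE/lowercase split used in these files is a common Ansible pattern: role defaults define UPPERCASE knobs, tasks consume lowercase copies, and an extra-vars file overrides the UPPERCASE names. The net effect is a simple dict merge where the override file wins; the values below are a small subset of the real files, used for illustration.

```python
def resolve_vars(defaults, overrides):
    """Extra vars win over role defaults; a sketch of Ansible's precedence."""
    merged = dict(defaults)
    merged.update(overrides)   # overrides replace matching default keys
    return merged

role_defaults = {"RDS_DB_SIZE": 5, "RDS_DB_NAME": "rds_test"}
secret_yml    = {"RDS_DB_SIZE": 10, "RDS_DB_NAME": "mydatabase"}
print(resolve_vars(role_defaults, secret_yml))
# {'RDS_DB_SIZE': 10, 'RDS_DB_NAME': 'mydatabase'}
```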
This file must be kept in a secret place and encrypted with Ansible Vault.
Once you are all set with the variables, then run this command if you have added all the roles in the playbook:
ansible-playbook -i inventory/hosts aws.yml -e@secret_vars/secret.yml
But please note that it will re-create the EC2 instance even if you have already created it.
Otherwise, use this command if you have not added the vpc and security group roles to the playbook:
ansible-playbook -i inventory/hosts aws.yml -e@secret_vars/secret.yml -e@secret_vars/rbgeek-dev.yml
After successful completion of playbook, login to the AWS Web Console and verify the resources:


Extra info: I have written a simple lookup plugin to find a security group ID from its name, because the RDS instance module only accepts the ID, not the name:
'''
USAGE:
  - debug:
      msg: "{{ lookup('get_sg_id_from_name', (vpc_region, rds_sg_name)) }}"
'''
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase

try:
    import boto
    import boto.ec2
except ImportError:
    raise AnsibleError("get_sg_id_from_name lookup cannot be run without boto installed")


class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        # terms[0] is the (region, security-group-name) tuple from the lookup call
        region, sg_name = terms[0]
        ec2_conn = boto.ec2.connect_to_region(region)
        sg = ec2_conn.get_all_security_groups(filters={'group_name': sg_name})[0]
        return [sg.id]
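Outside Ansible, the core of the plugin is just tuple unpacking plus one boto call. Here is a minimal standalone sketch with the boto connection mocked out, so the logic can be exercised without AWS credentials (the class names and the security group ID below are illustrative, not from the repo):

```python
# Minimal stand-ins for the boto objects (illustrative mocks, not real boto)
class FakeSG:
    def __init__(self, sg_id):
        self.id = sg_id

class FakeConn:
    def get_all_security_groups(self, filters=None):
        # Pretend AWS matched the requested group_name filter
        return [FakeSG("sg-0123abcd")]

def lookup_sg_id(conn, terms):
    # Same logic as the plugin: terms[0] is the (region, name) tuple
    region, sg_name = terms[0]
    sg = conn.get_all_security_groups(filters={'group_name': sg_name})[0]
    return [sg.id]

result = lookup_sg_id(FakeConn(), [("eu-west-1", "rbgeek-dev-rds-sg")])
print(result)  # ['sg-0123abcd']
```

The real plugin does the same thing, except that the connection comes from `boto.ec2.connect_to_region(region)`.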
Enjoy!
Hope this will help you!
Please remember me in your prayers!
If you have completed the previous parts of this series, then you have already cloned the git repo that contains all the roles; if not, clone it now:
git clone https://github.com/arbabnazar/ansible-aws-roles.git
cd ansible-aws-roles
Modify the aws.yml playbook to add the desired roles:
---
- hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - vpc
    - ec2sg
    - ec2key
    - ec2instance
    - elb
You may have noticed that we have also added the vpc, ec2sg and ec2key roles to the playbook. These resources will not be re-created if you already created them in the previous parts, because Ansible is idempotent.
Review/modify the variable file for EC2 Instance, see roles/ec2instance/defaults/main.yml:
---
# Variables that can be provided as extra vars
VPC_NAME: test
VPC_REGION: us-east-1 # N.Virginia
EC2_INSTANCE_TYPE: t2.micro
EC2_KEY_NAME: "my-default-key"
EC2_SECURITY_GROUP_NAME: "test"
EC2_COUNT: 1
EC2_VOLUME_SIZE: 8
EC2_SUBNET_ID: []
# Example of EC2_SUBNET_ID
# EC2_SUBNET_ID:
#   - "subnet-0c3e0b7b"
#   - "subnet-bf672ae6"
AMI_NAME: "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"
AMI_OWNER: "099720109477"
# Tags
ENVIRONMENT: test
SERVER_ROLE: test
# Used inside the tasks
vpc_name: "{{ VPC_NAME }}"
vpc_region: "{{ VPC_REGION }}"
ec2_instance_type: "{{ EC2_INSTANCE_TYPE }}"
ec2_key_name: "{{ EC2_KEY_NAME }}"
ec2_security_group_name: "{{ EC2_SECURITY_GROUP_NAME }}"
ec2_volume_size: "{{ EC2_VOLUME_SIZE }}"
ec2_count: "{{ EC2_COUNT }}"
ec2_subnet_id: "{{ EC2_SUBNET_ID }}"
# Please don't change the variables below unless you know what you are doing
# Only the Ubuntu distribution is supported
ami_name: "{{ AMI_NAME }}"
ami_owner: "{{ AMI_OWNER }}"
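A note on AMI_NAME: it is a shell-style wildcard pattern that the role resolves against the AMI names Canonical publishes (presumably via an AMI-find task inside the role). The matching semantics are plain globbing; a quick illustration in Python (the candidate AMI name below is made up for this example):

```python
import fnmatch

# AMI_NAME from roles/ec2instance/defaults/main.yml: a shell-style glob
ami_name_pattern = "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"

# Illustrative candidate in the format Canonical publishes (made up for this example)
candidate = "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-20160114"

matches = fnmatch.fnmatchcase(candidate, ami_name_pattern)
print(matches)  # True
```

The trailing `*` is what lets the role pick up newer datestamped builds of the same Trusty image without changing the variable.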
Also review/modify the variable file for ELB, see roles/elb/defaults/main.yml:
---
# Variables that can be provided as extra vars
VPC_REGION: us-east-1 # N.Virginia
ELB_NAME: "test"
ELB_SUBNET_ID: []
ELB_PURGE_SUBNETS: no
ELB_CROSS_AZ_LOAD_BALANCING: yes
ELB_PING_PROTOCOL: tcp
ELB_PING_PORT: 80
ELB_RESPONSE_TIMEOUT: 5
ELB_INTERVAL: 30
ELB_UNHEALTHY_THRESHOLD: 2
ELB_HEALTHY_THRESHOLD: 10
ELB_CONNECTION_DRAINING_TIMEOUT: 60
ELB_SECURITY_GROUP_NAME: "test"
ELB_LISTENERS:
  - protocol: http
    load_balancer_port: 80
    instance_protocol: http
    instance_port: 80
# Used inside the tasks
vpc_region: "{{ VPC_REGION }}"
elb_name: "{{ ELB_NAME }}"
elb_subnet_id: "{{ ELB_SUBNET_ID }}"
elb_purge_subnets: "{{ ELB_PURGE_SUBNETS }}"
elb_cross_az_load_balancing: "{{ ELB_CROSS_AZ_LOAD_BALANCING }}"
elb_connection_draining_timeout: "{{ ELB_CONNECTION_DRAINING_TIMEOUT }}"
elb_security_group_name: "{{ ELB_SECURITY_GROUP_NAME }}"
elb_listeners: "{{ ELB_LISTENERS }}"
elb_health_check:
  ping_protocol: "{{ ELB_PING_PROTOCOL }}"
  ping_port: "{{ ELB_PING_PORT }}"
  response_timeout: "{{ ELB_RESPONSE_TIMEOUT }}"
  interval: "{{ ELB_INTERVAL }}"
  unhealthy_threshold: "{{ ELB_UNHEALTHY_THRESHOLD }}"
  healthy_threshold: "{{ ELB_HEALTHY_THRESHOLD }}"
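As a sanity check on the health-check defaults above: the time for the ELB to flip an instance's state is roughly the check interval multiplied by the relevant threshold. A quick back-of-the-envelope calculation:

```python
# Health-check defaults from roles/elb/defaults/main.yml above
interval = 30            # ELB_INTERVAL: seconds between health checks
unhealthy_threshold = 2  # ELB_UNHEALTHY_THRESHOLD: consecutive failures
healthy_threshold = 10   # ELB_HEALTHY_THRESHOLD: consecutive successes

# Approximate time before the ELB marks an instance OutOfService / InService
time_to_unhealthy = interval * unhealthy_threshold
time_to_healthy = interval * healthy_threshold
print(time_to_unhealthy, time_to_healthy)  # 60 300
```

So with these defaults a failing instance is pulled out of rotation in about a minute, while a recovering instance takes around five minutes to be put back in service.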
We need to modify the values of all the uppercase variables. For this, we'll set them in an external file (in my case, secret.yml), which already contains our VPC, Security Group and EC2 Key Pair variables:
---
# Environment specific variables
COMPANY: rbgeek
ENVIRONMENT: dev
SERVER_ROLE: web
# VPC specific variables
VPC_NAME: "{{ COMPANY }}-{{ ENVIRONMENT }}"
VPC_REGION: eu-west-1 # Ireland
VPC_CIDR: "10.10.0.0/16"
VPC_CLASS_DEFAULT: "10.10"
# Github username for creating EC2 Key Pair
GITHUB_USERNAME: "arbabnazar"
EC2_KEY_NAME: "{{ GITHUB_USERNAME }}-github-key"
LOCAL_USER_SSH_KEY: no
# Ubuntu AMI specific variables
AMI_NAME: "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"
AMI_OWNER: "099720109477"
# EC2 instances specific variables
EC2_INSTANCE_TYPE: t2.nano
EC2_SECURITY_GROUP_NAME:
  - "{{ VPC_NAME }}-{{ SSH_SG_NAME }}"
  - "{{ VPC_NAME }}-{{ WEB_SG_NAME }}"
EC2_VOLUME_SIZE: 10
EC2_COUNT: 1
EC2_SUBNET_ID:
  - "{{ PUBLIC_SUBNET_1 }}"
  - "{{ PUBLIC_SUBNET_2 }}"
# Elastic Load Balancer specific variables
ELB_NAME: "{{ COMPANY }}-{{ ENVIRONMENT }}-{{ SERVER_ROLE }}-elb"
ELB_SUBNET_ID:
  - "{{ PUBLIC_SUBNET_1 }}"
  - "{{ PUBLIC_SUBNET_2 }}"
ELB_PURGE_SUBNETS: yes
ELB_CROSS_AZ_LOAD_BALANCING: yes
ELB_PING_PROTOCOL: tcp
ELB_PING_PORT: 80
ELB_RESPONSE_TIMEOUT: 5
ELB_INTERVAL: 30
ELB_UNHEALTHY_THRESHOLD: 2
ELB_HEALTHY_THRESHOLD: 10
ELB_CONNECTION_DRAINING_TIMEOUT: 60
ELB_SECURITY_GROUP_NAME: "{{ VPC_NAME }}-{{ ELB_SG_NAME }}"
ELB_LISTENERS:
  - protocol: http
    load_balancer_port: 80
    instance_protocol: http
    instance_port: 80
  - protocol: https
    load_balancer_port: 443
    instance_protocol: http
    instance_port: 80
    ssl_certificate_id: "arn:aws:iam::xxxxxxxxxxxx:server-certificate/tendo-crt"
# EC2 Security Groups specific variables
WEB_SG_NAME: "webserver-sg"
DATABASE_SG_NAME: "rds-sg"
SSH_SG_NAME: "ssh-sg"
ELB_SG_NAME: "elb-sg"
# Security Groups
EC2_SECURITY_GROUPS: "{{ SSH_SG + WEB_SG + DATABASE_SG + ELB_SG }}"
# Security Groups info (Name, Description and Rules) for Web, RDS and ELB
SSH_SG:
  - name: "{{ VPC_NAME }}-{{ SSH_SG_NAME }}"
    description: "This sg is for remote access to instances inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0
WEB_SG:
  - name: "{{ VPC_NAME }}-{{ WEB_SG_NAME }}"
    description: "This sg is for web instances inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 0.0.0.0/0
      - proto: tcp
        from_port: 443
        to_port: 443
        cidr_ip: 0.0.0.0/0
DATABASE_SG:
  - name: "{{ VPC_NAME }}-{{ DATABASE_SG_NAME }}"
    description: "This sg is for rds instances inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 3306
        to_port: 3306
        group_name: "{{ VPC_NAME }}-{{ WEB_SG_NAME }}"
ELB_SG:
  - name: "{{ VPC_NAME }}-{{ ELB_SG_NAME }}"
    description: "This sg is for ELB inside {{ VPC_NAME }} VPC"
    rules:
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 0.0.0.0/0
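One detail worth noting in the file above: `EC2_SECURITY_GROUPS: "{{ SSH_SG + WEB_SG + DATABASE_SG + ELB_SG }}"` relies on Jinja2's `+` operator concatenating the four single-element lists into one list of security group definitions. In plain Python terms (with abbreviated stand-in dicts, not the full rule structures):

```python
# Abbreviated stand-ins for the single-element SG lists defined in secret.yml
ssh_sg = [{"name": "rbgeek-dev-ssh-sg"}]
web_sg = [{"name": "rbgeek-dev-webserver-sg"}]
database_sg = [{"name": "rbgeek-dev-rds-sg"}]
elb_sg = [{"name": "rbgeek-dev-elb-sg"}]

# Same semantics as the Jinja2 expression {{ SSH_SG + WEB_SG + DATABASE_SG + ELB_SG }}
ec2_security_groups = ssh_sg + web_sg + database_sg + elb_sg
print([sg["name"] for sg in ec2_security_groups])
```

This is what lets the ec2sg role loop over one combined variable instead of four separate ones.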
This file must be kept in a secret place and encrypted with Ansible Vault.
Once you are all set with the variables, run this command if you have added the vpc, security group and ec2 key pair roles in the playbook:
ansible-playbook -i inventory/hosts aws.yml -e@secret_vars/secret.yml
Otherwise, use this command if you have not added the vpc, security group and ec2 key pair roles in the playbook:
ansible-playbook -i inventory/hosts aws.yml -e@secret_vars/secret.yml -e@secret_vars/rbgeek-dev.yml
After the playbook completes successfully, log in to the AWS Web Console and verify the resources:

We have set EC2_COUNT: 1 inside the secret.yml file, and it has created one EC2 instance in each of the public subnets that we created in Part 1 of this series.


EC2 instance registration with the ELB initially failed, because the ELB health check probes port 80 and the EC2 instance had no service listening on that port. To overcome this, we installed nginx during EC2 creation by passing user data inside the ec2instance role:
user_data: |
  #!/bin/sh
  sudo apt-get install nginx -y


You may have noticed that we have enabled an SSL certificate on the ELB. If you have a valid SSL certificate, that's great; otherwise, use these steps to generate a self-signed SSL certificate for testing with the ELB:
openssl genrsa -des3 -out tendo.org.key 1024
openssl req -nodes -newkey rsa:2048 -keyout tendo.org.key -out tendo.org.csr
cp tendo.org.key tendo.org.key.org
openssl rsa -in tendo.org.key.org -out tendo.org.key
openssl x509 -req -days 365 -in tendo.org.csr -signkey tendo.org.key -out tendo.org.crt
openssl rsa -in tendo.org.key -outform PEM > tendokey.pem
openssl x509 -inform PEM -in tendo.org.crt > tendo.crt
Upload the SSL certificate to AWS using awscli:
aws iam upload-server-certificate --server-certificate-name tendo-crt \
  --certificate-body file://tendo.crt --private-key file://tendokey.pem
Once uploaded successfully, get its ARN using this command:
aws iam list-server-certificates
After you are finished with your testing, delete the SSL certificate using this command:
aws iam delete-server-certificate --server-certificate-name tendo-crt
Extra Info: I have written a simple filter plugin to find information about EC2 instances, such as their IDs and IP addresses:
– EC2 instance IDs are needed to add the instances to the ELB
– EC2 instance IP addresses are needed to add the instances to the inventory
'''
USAGE:
  - debug:
      msg: "{{ ec2.results | get_ec2_info('id') }}"

Some useful ec2 keys:
  id
  dns_name
  public_ip
  private_ip
'''


def get_ec2_info(results, ec2_key):
    # Collect the given key from every instance in every ec2 module result
    ec2_info = []
    for item in results:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info


class FilterModule(object):
    def filters(self):
        return {
            'get_ec2_info': get_ec2_info,
        }
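To see what the filter does outside Ansible, here is a quick standalone check against a mocked `ec2.results` structure (the instance IDs and IPs below are made up for the example):

```python
# Mocked 'ec2.results': each item mirrors one ec2 module result, with an
# 'instances' list of per-instance dicts (values are illustrative)
results = [
    {"instances": [{"id": "i-0aaa1111", "public_ip": "54.0.0.1"},
                   {"id": "i-0bbb2222", "public_ip": "54.0.0.2"}]},
]

def get_ec2_info(results, ec2_key):
    # Same logic as the filter plugin: pull one key from every instance
    return [ec2[ec2_key] for item in results for ec2 in item["instances"]]

instance_ids = get_ec2_info(results, "id")
print(instance_ids)  # ['i-0aaa1111', 'i-0bbb2222']
```

The returned list can then be fed straight to the ELB registration task (for `'id'`) or used to build inventory entries (for `'public_ip'` or `'private_ip'`).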
Enjoy!
Hope this will help you!
Please remember me in your prayers!
In the next post, we'll create the RDS instance using these resources.