Terraform module to set up all resources needed for an AWS Elasticsearch Service domain.
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| cognito_enabled | Whether to enable Cognito for authentication in Kibana | bool | false | no |
| cognito_identity_pool_id | Required when cognito_enabled is enabled: ID of the Cognito Identity Pool to use | string | null | no |
| cognito_role_arn | Required when cognito_enabled is enabled: ARN of the IAM role that has the AmazonESCognitoAccess policy attached | string | null | no |
| cognito_user_pool_id | Required when cognito_enabled is enabled: ID of the Cognito User Pool to use | string | null | no |
| dedicated_master_count | Number of dedicated master nodes in the domain | number | 1 | no |
| dedicated_master_enabled | Whether dedicated master nodes are enabled for the domain | bool | false | no |
| dedicated_master_type | Instance type of the dedicated master nodes in the domain | string | "t2.small.elasticsearch" | no |
| disable_encrypt_at_rest | Whether to force-disable encryption at rest, overriding the default behavior of enabling encryption whenever the chosen instance_type supports it. To keep encryption disabled on an encryption-capable instance_type, you need to set this parameter to true. This is especially important for existing Amazon ES clusters, since enabling/disabling encryption at rest will destroy your cluster! | bool | false | no |
| elasticsearch_version | Version of the Elasticsearch domain | string | "6.7" | no |
| environment | Environment name | string | n/a | yes |
| instance_count | Size of the Elasticsearch domain | number | 1 | no |
| instance_type | Instance type to use for the Elasticsearch domain | string | "t2.small.elasticsearch" | no |
| application_logging_enabled | Whether to enable Elasticsearch application logs in CloudWatch | bool | false | no |
| logging_enabled | Whether to enable Elasticsearch slow logs in CloudWatch | bool | false | no |
| logging_retention | How many days to retain Elasticsearch logs in CloudWatch | number | 30 | no |
| name | Name to use for the Elasticsearch domain | string | n/a | yes |
| options_indices_fielddata_cache_size | Sets the indices.fielddata.cache.size advanced option: the percentage of heap space allocated to fielddata | number | null | no |
| options_indices_query_bool_max_clause_count | Sets the indices.query.bool.max_clause_count advanced option: the maximum number of allowed boolean clauses in a query | number | 1024 | no |
| options_rest_action_multi_allow_explicit_index | Sets the rest.action.multi.allow_explicit_index advanced option. When set to false, Elasticsearch will reject requests that have an explicit index specified in the request body | bool | true | no |
| project | Project name | string | n/a | yes |
| security_group_ids | Extra security group IDs to attach to the Elasticsearch domain. Note: a default SG is already created and exposed via outputs | list(string) | [] | no |
| snapshot_bucket_enabled | Whether to create a bucket for custom Elasticsearch backups (other than the default daily one) | string | "false" | no |
| snapshot_start_hour | Hour during which an automated daily snapshot is taken of the Elasticsearch indices | number | 3 | no |
| subnet_ids | Required if vpc_id is specified: subnet IDs for the VPC-enabled Elasticsearch domain endpoints to be created in | list(string) | [] | no |
| tags | Optional tags | map | {} | no |
| volume_iops | Required if volume_type is "io1": amount of provisioned IOPS for the EBS volume | number | 0 | no |
| volume_size | EBS volume size (in GB) to use for the Elasticsearch domain | number | n/a | yes |
| volume_type | EBS volume type to use for the Elasticsearch domain | string | "gp2" | no |
| vpc_id | VPC ID to deploy the Elasticsearch domain in. If set, you also need to specify subnet_ids. If not set, the module creates a public domain | string | null | no |
| zone_awareness_enabled | Whether to enable zone awareness | bool | false | no |
| Name | Description |
|---|---|
| arn | ARN of the Elasticsearch domain |
| domain_id | ID of the Elasticsearch domain |
| domain_name | Name of the Elasticsearch domain |
| domain_region | Region of the Elasticsearch domain |
| endpoint | DNS endpoint of the Elasticsearch domain |
| role_arn | ARN of the IAM role (e.g. to attach to an instance or user) allowing access to the Elasticsearch snapshot bucket |
| role_id | ID of the IAM role (e.g. to attach to an instance or user) allowing access to the Elasticsearch snapshot bucket |
| sg_id | ID of the Elasticsearch security group |
```hcl
module "elasticsearch" {
  source         = "github.com/skyscrapers/terraform-awselasticsearch//elasticsearch?ref=4.0.0"
  name           = "es"
  project        = var.project
  environment    = terraform.workspace
  instance_count = 3
  instance_type  = "m5.large.elasticsearch"
  volume_size    = 100
  vpc_id         = data.terraform_remote_state.networking.outputs.vpc_id
  subnet_ids     = data.terraform_remote_state.networking.outputs.private_db_subnets
}
```
```hcl
resource "aws_elasticsearch_domain_policy" "es_policy" {
  domain_name = module.elasticsearch.domain_name

  access_policies = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "es:*",
      "Principal": {
        "AWS": "${aws_iam_user.es_user.arn}"
      },
      "Effect": "Allow",
      "Resource": "${module.elasticsearch.arn}/*"
    }
  ]
}
POLICY
}
```

The AWS Elasticsearch Service handles backups automatically via daily snapshots. You can control when this happens by setting snapshot_start_hour.
It's possible to create a custom backup schedule by using the normal Elasticsearch snapshot API. This module can create an S3 bucket and IAM role allowing such scenarios (snapshot_bucket_enabled = true). More info on how to create custom snapshots can be found in the AWS documentation.
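A hedged sketch of that scenario (note that the snapshot repository itself still has to be registered through the Elasticsearch `_snapshot` API, which Terraform does not do for you):

```hcl
module "elasticsearch" {
  source      = "github.com/skyscrapers/terraform-awselasticsearch//elasticsearch?ref=4.0.0"
  name        = "es"
  project     = var.project
  environment = terraform.workspace
  volume_size = 100

  # Create an S3 bucket plus IAM role for custom snapshots
  # (documented as a string input, hence the quotes)
  snapshot_bucket_enabled = "true"
}

# Attach this role to whatever instance or user registers the
# snapshot repository against the domain
output "es_snapshot_role_arn" {
  value = module.elasticsearch.role_arn
}
```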
This module by default creates CloudWatch Log Groups and IAM permissions for Elasticsearch slow logging, but these logs are not enabled by default. You can control logging behavior via the logging_enabled and logging_retention parameters. When enabling this, make sure you also enable slow logging on the Elasticsearch side, following the AWS documentation.
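For example (a minimal sketch; only the logging-related inputs are the point here, the rest mirrors the usage example above):

```hcl
module "elasticsearch" {
  source      = "github.com/skyscrapers/terraform-awselasticsearch//elasticsearch?ref=4.0.0"
  name        = "es"
  project     = var.project
  environment = terraform.workspace
  volume_size = 100

  # Publish slow logs and application logs to CloudWatch
  logging_enabled             = true
  application_logging_enabled = true

  # Keep the logs for 2 weeks instead of the default 30 days
  logging_retention = 14
}
```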
This module generates a Helm values file which can be used for the elasticsearch/monitoring chart. The file, helm_values.yaml, is created in the same folder as the Terraform code that calls this module.
This module will not work without the Elasticsearch service-linked role AWSServiceRoleForAmazonElasticsearchService. This role needs to be created once per AWS account, so you will need to add it if it's not present yet.
Here is a code sample you can use:
```hcl
resource "aws_iam_service_linked_role" "es" {
  aws_service_name = "es.amazonaws.com"
}
```

This module deploys our elasticsearch/monitoring chart on Kubernetes.
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| elasticsearch_monitoring_chart_version | elasticsearch-monitoring Helm chart version to deploy | string | "0.2.5" | no |
| elasticsearch_endpoint | Endpoint of the AWS Elasticsearch domain | string | n/a | yes |
| elasticsearch_domain_name | Domain name of the AWS Elasticsearch domain | string | n/a | yes |
| elasticsearch_domain_region | Region of the AWS Elasticsearch domain | string | n/a | yes |
| kubernetes_namespace | Kubernetes namespace where to deploy the skyscrapers/elasticsearch-monitoring chart | string | n/a | yes |
| kubernetes_worker_instance_role_arns | Role ARNs of the Kubernetes nodes to attach the kube2iam assume_role to | list(string) | n/a | yes |
| force_helm_update | Modify this variable to trigger an update on all Helm charts (you can set any value). Due to current limitations of the Helm provider, it doesn't detect drift on | string | "1" | no |
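A usage sketch under assumptions: the //monitoring submodule path and the way the worker role ARNs are obtained are illustrative, not confirmed by this README:

```hcl
module "elasticsearch_monitoring" {
  # Submodule path is an assumption based on the repository layout
  source = "github.com/skyscrapers/terraform-awselasticsearch//monitoring?ref=4.0.0"

  elasticsearch_endpoint      = module.elasticsearch.endpoint
  elasticsearch_domain_name   = module.elasticsearch.domain_name
  elasticsearch_domain_region = module.elasticsearch.domain_region
  kubernetes_namespace        = "infrastructure"

  # Worker node role ARNs for the kube2iam assume_role;
  # the remote-state source here is illustrative
  kubernetes_worker_instance_role_arns = [
    data.terraform_remote_state.kubernetes.outputs.worker_iam_role_arn,
  ]
}
```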
This module deploys an Ingress with external authentication on Kubernetes to reach the AWS Elasticsearch Kibana endpoint.
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| elasticsearch_endpoint | Endpoint of the AWS Elasticsearch domain | string | n/a | yes |
| elasticsearch_domain_name | Domain name of the AWS Elasticsearch domain | string | n/a | yes |
| kubernetes_namespace | Kubernetes namespace where to deploy the Ingress | string | n/a | yes |
| ingress_host | Hostname to use for the Ingress | string | n/a | yes |
| ingress_auth_url | Value to set for the nginx.ingress.kubernetes.io/auth-url annotation | string | n/a | yes |
| ingress_auth_signin | Value to set for the nginx.ingress.kubernetes.io/auth-signin annotation | string | n/a | yes |
| ingress_auth_configuration_snippet | Value to set for the nginx.ingress.kubernetes.io/configuration-snippet annotation | string | null | no |
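A hedged example of calling this submodule (the //kibana-ingress path and the example.com hostnames and auth endpoints are assumptions for illustration):

```hcl
module "kibana_ingress" {
  # Submodule path is an assumption based on the repository layout
  source = "github.com/skyscrapers/terraform-awselasticsearch//kibana-ingress?ref=4.0.0"

  elasticsearch_endpoint    = module.elasticsearch.endpoint
  elasticsearch_domain_name = module.elasticsearch.domain_name
  kubernetes_namespace      = "infrastructure"

  # Hostname and external-auth endpoints are placeholders
  ingress_host        = "kibana.example.com"
  ingress_auth_url    = "https://auth.example.com/oauth2/auth"
  ingress_auth_signin = "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
}
```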
This module deploys keycloak-gatekeeper as an OIDC proxy on Kubernetes to reach the AWS Elasticsearch Kibana endpoint.
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| elasticsearch_endpoint | Endpoint of the AWS Elasticsearch domain | string | n/a | yes |
| elasticsearch_domain_name | Domain name of the AWS Elasticsearch domain | string | n/a | yes |
| kubernetes_namespace | Kubernetes namespace where to deploy the keycloak-gatekeeper proxy chart | string | n/a | yes |
| gatekeeper_image | Docker image to use for the keycloak-gatekeeper deployment | string | "keycloak/keycloak-gatekeeper:6.0.1" | no |
| gatekeeper_ingress_host | Hostname to use for the Ingress | string | n/a | yes |
| gatekeeper_discovery_url | URL for OpenID autoconfiguration | string | n/a | yes |
| gatekeeper_client_id | Client ID for OpenID server | string | n/a | yes |
| gatekeeper_client_secret | Client secret for OpenID server | string | n/a | yes |
| gatekeeper_oidc_groups | Groups that will be granted access. When using Dex with GitHub, teams are defined in the form <gh_org>:<gh_team>, for example skyscrapers:k8s-admins | string | n/a | yes |
| gatekeeper_timeout | Upstream timeouts to use for the proxy | string | "500s" | no |
| gatekeeper_extra_args | Additional keycloak-gatekeeper command line arguments | list(string) | [] | no |
| Name | Description |
|---|---|
| callback_uri | Callback URI. You might need to register this to your OIDC provider (like CoreOS Dex) |
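Tying the inputs and the callback_uri output together, a sketch under assumptions (the //kibana-oidc-proxy path, hostnames, and discovery URL are illustrative placeholders):

```hcl
module "kibana_oidc_proxy" {
  # Submodule path is an assumption based on the repository layout
  source = "github.com/skyscrapers/terraform-awselasticsearch//kibana-oidc-proxy?ref=4.0.0"

  elasticsearch_endpoint    = module.elasticsearch.endpoint
  elasticsearch_domain_name = module.elasticsearch.domain_name
  kubernetes_namespace      = "infrastructure"

  # Hostname and OIDC settings are placeholders; check your provider's
  # documentation for the exact discovery URL format
  gatekeeper_ingress_host  = "kibana.example.com"
  gatekeeper_discovery_url = "https://dex.example.com/.well-known/openid-configuration"
  gatekeeper_client_id     = "kibana"
  gatekeeper_client_secret = var.kibana_oidc_client_secret
  gatekeeper_oidc_groups   = "skyscrapers:k8s-admins"
}

# Register this callback with your OIDC provider (e.g. CoreOS Dex)
output "kibana_callback_uri" {
  value = module.kibana_oidc_proxy.callback_uri
}
```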