Hi, DevOps fans. In the first part we got acquainted with the file structure of the OpenSearch Terraform module. We also discussed in depth the AWS IAM service-linked role for Elasticsearch and which OpenSearch version to use. In the current article we will concentrate on the OpenSearch cluster configuration.
cluster_config {
  instance_type          = var.instance_type
  instance_count         = var.instance_count
  zone_awareness_enabled = true

  zone_awareness_config {
    availability_zone_count = var.az_count
  }
}
So, we define the instance type and count – both are set from variables. We also want our cluster to be spread over multiple availability zones, so we provide zone awareness settings to get HA. I have to put an accent here: this is the base cluster configuration. In this scenario we assume that every node performs both the data and the master role at the same time. That would be enough not only for course purposes, but even for most production cases – though, sure, not for all. Moreover, it is not what AWS recommends.
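For completeness, the corresponding variable declarations (probably living in variables.tf) might look like this. This is a sketch: the names match the snippet above, but the types and defaults are my illustrative assumptions.

variable "instance_type" {
  description = "Instance type for the OpenSearch nodes"
  type        = string
  default     = "t3.medium.search"
}

variable "instance_count" {
  description = "Number of nodes in the cluster"
  type        = number
  default     = 3
}

variable "az_count" {
  description = "Number of availability zones to spread the cluster over"
  type        = number
  default     = 3
}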
You will find many articles and many notes in the AWS documentation where dedicated master nodes are promoted. It is easy to come to the conclusion that this is the only viable architecture at all. Let's clear that up. You can check the corresponding AWS documentation page. The main idea is to have separate nodes for cluster management and separate ones for keeping the data.
And now several essential personal recommendations – attention, please:
- If anything you do on the cluster is so heavy that indexing/querying operations can bring down a node, and that node is a master, then yes – I would recommend a dedicated master node architecture.
- For larger clusters (we are speaking here about more than 10 nodes) dedicated masters are probably a must. In all other cases you can go without them.
You may then ask: so why does AWS promote dedicated masters? The answer is rather obvious – money: they want you to pay more. But you, as a good DevOps engineer, need to use your own head and choose the optimal, cost-effective solution.
OK, you may then ask: “But I do need dedicated nodes – how do I get them using Terraform?” It is not difficult. Have a look at the OpenSearch Terraform documentation -> cluster config section. There you will see that we can enable dedicated master nodes and define their number (see the sketch below). Keep in mind, though, that in real life master nodes don’t require as much power as data nodes – so it is better to use different instance types for different node roles. As a possible Terraform solution, you may have two modules: the first one deploys a cluster with dedicated master nodes, the second one attaches data nodes to the existing cluster. I am not going to show exactly how to do that – it is a bit outside the scope of the current article. But if you are interested, please write to me by email – if I see big demand from the readers’ side, I will write a separate article on how to deploy an OpenSearch cluster with separate master and data nodes.
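A minimal sketch of what such a cluster_config could look like – the dedicated_master_* arguments come from the aws_opensearch_domain resource documentation, while the concrete instance types and counts are my illustrative assumptions:

cluster_config {
  instance_type  = "r6g.large.search"   # data nodes: more memory/CPU
  instance_count = var.instance_count

  # dedicated masters only coordinate the cluster, so smaller instances are fine
  dedicated_master_enabled = true
  dedicated_master_count   = 3          # an odd number, to keep quorum
  dedicated_master_type    = "m6g.large.search"

  zone_awareness_enabled = true

  zone_awareness_config {
    availability_zone_count = var.az_count
  }
}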
OK, let’s go further through our Terraform module. After the cluster configuration section we define the VPC options – our subnets and security groups. Again, we pass them in as variables. These values come from the S3 remote state – to have them in the state, you have to run the Terraform network module first (a sketch of that wiring follows the block below).
vpc_options {
  subnet_ids         = slice(var.subnets[*].id, 0, var.az_count)
  security_group_ids = var.sg[*].id
}
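As for the remote state wiring mentioned above, it could look roughly like this. A sketch only: the bucket, key and output names are my assumptions and depend on how your network module is set up.

data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-terraform-states"        # assumed bucket name
    key    = "network/terraform.tfstate"  # assumed state key
    region = "us-east-1"
  }
}

# then, when calling the opensearch module:
module "opensearch" {
  source  = "./modules/opensearch"
  subnets = data.terraform_remote_state.network.outputs.private_subnets
  sg      = data.terraform_remote_state.network.outputs.opensearch_sg
  # ...
}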
It is very good that AWS gives us backup capabilities out of the box. All we have to do is define at which hour the automated snapshots should be taken.
snapshot_options {
  automated_snapshot_start_hour = var.automated_snapshot_start_hour
}
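The start hour is an hour of day in UTC, so it makes sense to guard the variable with validation. A sketch – the variable name matches the snippet above, the default is my assumption:

variable "automated_snapshot_start_hour" {
  description = "UTC hour of day at which the automated snapshot is taken"
  type        = number
  default     = 0

  validation {
    condition     = var.automated_snapshot_start_hour >= 0 && var.automated_snapshot_start_hour <= 23
    error_message = "The snapshot start hour must be between 0 and 23 (UTC)."
  }
}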
We also want to gather logs from our cluster and observe them in AWS CloudWatch. Let’s open terraform/modules/opensearch/logs.tf:
resource "aws_cloudwatch_log_group" "opensearch" {
  name = var.domain_name
}

resource "aws_cloudwatch_log_resource_policy" "opensearch" {
  policy_name     = "opensearch-logs"
  policy_document = <<CONFIG
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "es.amazonaws.com"
      },
      "Action": [
        "logs:PutLogEvents",
        "logs:PutLogEventsBatch",
        "logs:CreateLogStream"
      ],
      "Resource": "arn:aws:logs:*"
    }
  ]
}
CONFIG
}
Here we create an OpenSearch CloudWatch log group and grant the cluster permission to write logs to it. Then, in the main es.tf file, we define which log types we want to send to that log group.
log_publishing_options {
  cloudwatch_log_group_arn = aws_cloudwatch_log_group.opensearch.arn
  enabled                  = true
  log_type                 = "ES_APPLICATION_LOGS"
}

log_publishing_options {
  cloudwatch_log_group_arn = aws_cloudwatch_log_group.opensearch.arn
  enabled                  = true
  log_type                 = "SEARCH_SLOW_LOGS"
}
Pay attention to the slow logs section in particular – these logs are not collected by default, but I recommend turning them on. Keep in mind that publishing alone is not enough: OpenSearch only emits slow log entries for indices that have slow-log thresholds configured (settings like index.search.slowlog.threshold.query.warn, set through the index settings API). It is also good to have encryption between cluster nodes:
node_to_node_encryption {
  enabled = true
}
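While we are at it: node-to-node encryption usually goes hand in hand with encryption at rest and HTTPS enforcement. A hedged sketch of the companion blocks – both are documented arguments of the same resource, but they are not part of the original module, and the TLS policy value is my assumption:

encrypt_at_rest {
  enabled = true
}

domain_endpoint_options {
  enforce_https       = true
  tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
}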
Here is a small example of the advanced_options section:
advanced_options = {
  "rest.action.multi.allow_explicit_index" = "true"
}
I added it here to draw your attention to this particular property. It is already set to true by default, but it is quite possible that you would want to turn it off for security reasons. Please read more about it in “Identity and access management” -> section “Advanced options and API considerations”.
OK, great. We have finished with the cluster configuration, VPC, network and security group settings. We also know how to deal with logging, node-to-node encryption and snapshots, and how to add advanced configuration options. In the next, third lecture, we will speak about storage – where and how to keep our data. Then I will show you how to add CloudWatch alarms for essential metrics. We will also go over Terraform locals and variables and prepare everything required to apply the OpenSearch Terraform module. Hope to see you soon. Please check my blog regularly, or simply subscribe to my newsletter. Alternatively, you can go through all the material at once in a convenient and fast way in my online course at Udemy. Below is the link to the course. As a reader of this blog, you also get the possibility to use a coupon for the best possible price.