A Kubernetes cluster managed by AWS EKS, created with Terraform
In this post, we’ll walk through deploying an Elastic Kubernetes Service (EKS) cluster on Amazon Web Services (AWS) using Terraform.
The associated code is available here
Prerequisites
Before we start, ensure you have the following:
- An AWS account with necessary permissions to create EKS clusters, IAM roles, VPCs, and other related resources.
- Terraform installed on your local machine.
- AWS CLI configured with your credentials. For these requirements, you can follow this guide
- kubectl to manage the EKS cluster. For that, you can follow the official guide
Overview of the repository
Here’s a high-level view of the repository structure:
aws_tf_eks_mario/
├── providers.tf
├── variables.tf
├── outputs.tf
├── vpc.tf
└── main.tf
Each of these files has a specific role in the Terraform configuration:
- providers.tf: Configures the necessary providers.
- variables.tf: Defines the input variables for the Terraform script.
- outputs.tf: Specifies output values.
- vpc.tf: Contains the configuration for the Virtual Private Cloud (VPC) that will house the EKS cluster.
- main.tf: Includes the configuration for the EKS cluster and its associated resources.
Configuration
1. Variables
Create a terraform.tfvars file where you can tailor the cluster to your requirements. The two most important variables to customize are the region where you want to deploy the infrastructure and the AWS profile you defined earlier, which must have the necessary permissions.
region = "eu-west-1"
profile = "p-dev"
The variables.tf file contains all the customizable parameters with default values, providing a comprehensive overview of the available options.
variable "region" {
  description = "The AWS region to deploy resources"
  type        = string
  default     = "eu-west-1"
}

variable "profile" {
  description = "The AWS CLI profile to use"
  type        = string
  default     = "p_lambda_deployer"
}

variable "cluster_name" {
  description = "Name of the EKS cluster being created"
  type        = string
  default     = "aws-eks-easy-cluster-tf"
}

variable "cluster_version" {
  description = "Version of the EKS cluster being created"
  type        = string
  default     = "1.29"
}

variable "cluster_node_size" {
  description = "The size of the nodes to provision"
  type        = string
  default     = "t2.micro"
}

variable "cluster_node_groups_default" {
  description = "Default configuration for the EKS node groups"
  type = object({
    instance_type = string
    ami_type      = string
  })
  default = {
    instance_type = "t2.micro"
    ami_type      = "AL2_x86_64"
  }
}

variable "cluster_node_groups" {
  description = "Configurations for EKS managed node groups"
  type = map(object({
    name         = string
    min_size     = number
    max_size     = number
    desired_size = number
  }))
  default = {
    "worker_group_mgmt_one" : {
      name         = "worker-group-1"
      min_size     = 1
      max_size     = 3
      desired_size = 2
    },
    "worker_group_mgmt_two" : {
      name         = "worker-group-2"
      min_size     = 1
      max_size     = 3
      desired_size = 1
    }
  }
}
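Because cluster_node_groups is an ordinary map variable, the node-group topology can also be overridden from terraform.tfvars without touching the module code. A minimal sketch (the sizes below are illustrative, not part of the repository):

```hcl
# terraform.tfvars — hypothetical override replacing the two default groups
# with a single, slightly larger one
cluster_node_groups = {
  "worker_group_mgmt_one" : {
    name         = "worker-group-1"
    min_size     = 2
    max_size     = 5
    desired_size = 3
  }
}
```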
2. Providers
In providers.tf, configure the AWS provider with the region and profile:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.51.1"
    }
  }
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
3. VPC
In vpc.tf, define the Virtual Private Cloud (VPC) and its components:
data "aws_availability_zones" "available" {}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.1"

  name = "aws-eks-easy-cluster-tf-vpc"
  cidr = "10.0.0.0/16"

  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "aws-eks-easy-cluster-tf-vpc"
    Terraform   = "true"
    Environment = "dev"
  }

  public_subnet_tags = {
    Name = "aws-eks-easy-cluster-tf-public-subnet"
  }

  private_subnet_tags = {
    Name = "aws-eks-easy-cluster-tf-private-subnet"
  }
}
This code creates a VPC with the specified CIDR block and three private and three public subnets, spread across three availability zones, using the widely used community module terraform-aws-modules/vpc/aws.
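One common addition not included in the repository: EKS relies on well-known subnet tags to decide where to place load balancers. If you later expose services through an ELB/ALB, the subnet tag blocks of the vpc module can be extended roughly like this (the kubernetes.io tag keys are the standard EKS conventions; the Name values mirror the ones above):

```hcl
  public_subnet_tags = {
    Name                     = "aws-eks-easy-cluster-tf-public-subnet"
    "kubernetes.io/role/elb" = 1 # allows EKS to place internet-facing load balancers here
  }

  private_subnet_tags = {
    Name                              = "aws-eks-easy-cluster-tf-private-subnet"
    "kubernetes.io/role/internal-elb" = 1 # allows EKS to place internal load balancers here
  }
```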
4. Main
In main.tf, define the EKS cluster and its node groups:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.21.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_endpoint_public_access = true

  eks_managed_node_group_defaults = var.cluster_node_groups_default
  eks_managed_node_groups         = var.cluster_node_groups
}
This configuration defines two worker node groups that default to the t2.micro EC2 instance type, using the widely used community module terraform-aws-modules/eks/aws.
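Each per-group map is merged over eks_managed_node_group_defaults, so individual groups can override any default. A hypothetical override (the instance-type change below is illustrative; note that the EKS managed node-group submodule of this module expects instance_types as a list, which is worth double-checking against the module's documentation for the pinned version):

```hcl
# Hypothetical: give worker-group-1 a larger instance type while
# worker_group_mgmt_two keeps the defaults from cluster_node_groups_default
eks_managed_node_groups = {
  "worker_group_mgmt_one" : {
    name           = "worker-group-1"
    min_size       = 1
    max_size       = 3
    desired_size   = 2
    instance_types = ["t3.small"] # overrides the default for this group only
  },
  "worker_group_mgmt_two" : {
    name         = "worker-group-2"
    min_size     = 1
    max_size     = 3
    desired_size = 1
  }
}
```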
5. Output
In outputs.tf, define outputs that provide useful information after deployment:
output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane"
  value       = module.eks.cluster_security_group_id
}
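Note that the verification step below reads region and cluster_name with terraform output -raw, so those two outputs need to exist as well. A straightforward sketch, built from the variables defined earlier:

```hcl
output "region" {
  description = "AWS region of the cluster"
  value       = var.region
}

output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = var.cluster_name
}
```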
Deployment
Deploy the cluster with Terraform in a few commands:
terraform init
terraform plan
terraform apply --auto-approve
Verification
- Configure kubectl to access the Kubernetes cluster:
aws eks --profile p-dev --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
- Verify the cluster:
kubectl cluster-info
- Verify the worker nodes:
kubectl get nodes
Destruction
To destroy the deployed cluster, use the following command:
terraform destroy --auto-approve