Kubernetes Cluster on Multi-Cloud using Terraform and Ansible

Aditya Raj
10 min read · May 27, 2021

Kubernetes

Kubernetes is an open-source container orchestration platform that automates various processes involved in deploying, managing, and scaling containerized applications.

A Kubernetes cluster can be set up on on-premises infrastructure or on public, private, or hybrid clouds. This is why Kubernetes is an ideal platform for hosting applications that require rapid scaling.

Kubernetes Cluster

A Kubernetes cluster is a set of nodes that run containerized applications. A cluster allows an application to run across multiple machines and environments.

A Kubernetes cluster consists of one master node and several worker nodes. These nodes can be physical computers or virtual machines. The master node controls the state of the cluster and assigns tasks to the worker nodes, and the worker nodes run the application workloads.
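
Once a cluster is running, this division of roles is easy to see with kubectl. The output below is only illustrative; node names and versions depend on the cluster:

$ kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
masternode    Ready    control-plane,master   10m   v1.21.1
workernode0   Ready    <none>                 8m    v1.21.1
workernode1   Ready    <none>                 8m    v1.21.1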

We can use automation tools like Ansible or Puppet to automate the configuration of the K8s cluster. If you want to know how to configure a K8s cluster using an Ansible role, you can refer to my article "Configure Kubernetes cluster using Ansible Role". The link is below: 👇 👇

https://www.linkedin.com/pulse/configure-kubernetes-cluster-using-ansible-role-aditya-raj/

In this article, I am going to configure a K8s cluster on multi-cloud (AWS & Azure) using Terraform and Ansible. Let's first learn a little about Terraform and Ansible.

Terraform

Terraform is an open-source infrastructure-as-code tool created by HashiCorp. Terraform lets you describe infrastructure in a simple, human-readable language called HCL (HashiCorp Configuration Language). Terraform reads the configuration files and creates an execution plan, which can then be applied to launch the infrastructure.

Ansible

Ansible is an open-source IT configuration management, deployment, and orchestration tool. Ansible is very simple to use yet powerful enough to automate complex multi-tier IT application environments. Ansible is agentless and requires no additional security infrastructure, so it's easy to deploy.
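
Because Ansible is agentless, all it needs is SSH access and an inventory file. As a quick illustration (the inventory filename here matches the one generated later in this article):

# Ping every host listed in the inventory over SSH; no agent needed on the managed nodes
ansible all -i inventory -m ping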

For this setup, I will use Red Hat Enterprise Linux 8 as the workstation, so Terraform and Ansible need to be installed on it.

Prerequisites

Install Terraform:

Install yum-config-manager to manage your repositories.

$ sudo yum install -y yum-utils

Use yum-config-manager to add the official HashiCorp Linux repository.

$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

Install Terraform.

$ sudo yum -y install terraform
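
Verify the installation by checking the version:

$ terraform -version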

Install Ansible:

The Ansible controller node requires Python 2 (version 2.7) or Python 3 (version 3.5 or higher).

Ansible can be installed on Red Hat 8 with pip, the Python package manager.

pip3 install ansible
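
Verify that Ansible is available on the PATH:

ansible --version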

Authenticating to AWS

The AWS provider is used to interact with the many resources supported by AWS. The provider needs to be configured with proper credentials before it can be used. We can store our Access Key and Secret Key in the credentials file that lives at ~/.aws/credentials. A simple way to create this file is to install the AWS CLI and run the aws configure command.

aws configure --profile aditya
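
After running aws configure, the credentials file at ~/.aws/credentials contains a profile section like the one below (the key values here are only placeholders):

[aditya]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx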

Now we can use the Provider block to configure Terraform to use the profile defined in the credentials file.

# Configure the AWS Provider
provider "aws" {
  region  = "ap-south-1"
  profile = "aditya"
}

For more information, you can refer to the documentation: 👇 👇

https://registry.terraform.io/providers/hashicorp/aws/latest/docs

Authenticating to Azure

The simplest way to authenticate Terraform to Azure is with the Azure CLI. First, log in to the Azure CLI using the az login command.

az login

Once logged in, if the account has more than one subscription, we can list them with the az account list command.

az account list

Now set the subscription that we want Terraform to use with the az account set command.

az account set --subscription="SUBSCRIPTION_ID"
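
We can confirm which subscription is currently active, and will therefore be used by Terraform, with:

az account show --output table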

Now we can use the provider block to configure Terraform to use the default subscription set in the Azure CLI.

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

For more information, you can refer to the documentation: 👇 👇

https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/azure_cli

I am going to create separate Terraform files for launching the infrastructure for the Kubernetes master node on AWS and the Kubernetes worker nodes on Azure. But before that, let's create an SSH key for connecting to the master node and worker nodes.

I am creating the key.tf file to create the SSH key and then save it locally as terraform_key.pem.

# Create (and display) an SSH key
resource "tls_private_key" "k8s_ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Save the private key locally
resource "local_file" "keyfile" {
  content         = tls_private_key.k8s_ssh.private_key_pem
  filename        = "terraform_key.pem"
  file_permission = "0400"
}

Now I am creating the aws.tf file, which will create the infrastructure with a VPC, subnet, internet gateway, routing table, and security group, and then finally launch the instance for the Kubernetes master node.

# Terraform AWS provider
provider "aws" {
  profile = "aditya"
  region  = "ap-south-1"
}

# Provides EC2 key pair
resource "aws_key_pair" "terraformkey" {
  key_name   = "terraform_key"
  public_key = tls_private_key.k8s_ssh.public_key_openssh
}

# Create VPC
resource "aws_vpc" "k8s_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = "K8S VPC"
  }
}

# Create Subnet
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.k8s_vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "ap-south-1a"
  tags = {
    Name = "Public Subnet"
  }
}

# Create Internet Gateway
resource "aws_internet_gateway" "k8s_gw" {
  vpc_id = aws_vpc.k8s_vpc.id
  tags = {
    Name = "K8S GW"
  }
}

# Create Routing table
resource "aws_route_table" "k8s_route" {
  vpc_id = aws_vpc.k8s_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.k8s_gw.id
  }

  tags = {
    Name = "K8S Route"
  }
}

# Associate Routing table
resource "aws_route_table_association" "k8s_asso" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.k8s_route.id
}

# Create security group (allows all inbound traffic for simplicity)
resource "aws_security_group" "allow_ssh_http" {
  name        = "Web_SG"
  description = "Allow SSH and HTTP inbound traffic"
  vpc_id      = aws_vpc.k8s_vpc.id
  ingress {
    description = "Allow All"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  tags = {
    Name = "K8S SG"
  }
}

# Launch EC2 instance for Master Node
resource "aws_instance" "k8s" {
  ami                         = "ami-010aff33ed5991201"
  instance_type               = "t2.micro"
  key_name                    = aws_key_pair.terraformkey.key_name
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.public_subnet.id
  vpc_security_group_ids      = [aws_security_group.allow_ssh_http.id]
  tags = {
    Name = "Master Node"
  }
}

Next, create the azure.tf file to create a resource group, virtual network, subnet, public IPs, network security group, and network interfaces, and to launch the virtual machines for the Kubernetes worker nodes.

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

variable "n" {
  type        = number
  description = "Number of Worker Nodes"
}

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myk8sgroup" {
  name     = "k8sResourceGroup"
  location = "Central India"
  tags = {
    environment = "K8s Resource Group"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "myk8snetwork" {
  name                = "k8sVnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  tags = {
    environment = "K8s VN"
  }
}

# Create subnet
resource "azurerm_subnet" "myk8ssubnet" {
  name                 = "k8sSubnet"
  resource_group_name  = azurerm_resource_group.myk8sgroup.name
  virtual_network_name = azurerm_virtual_network.myk8snetwork.name
  address_prefixes     = ["10.0.1.0/24"]
}

# Create public IPs
resource "azurerm_public_ip" "myk8spublicip" {
  count               = var.n
  name                = "k8sPublicIP${count.index}"
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  allocation_method   = "Dynamic"
  tags = {
    environment = "K8s Public IP"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myk8snsg" {
  name                = "k8sNetworkSecurityGroup"
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
  tags = {
    environment = "K8s Security Group"
  }
}

# Create network interfaces
resource "azurerm_network_interface" "myk8snic" {
  count               = var.n
  name                = "k8sNIC${count.index}"
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  ip_configuration {
    name                          = "myNicConfiguration${count.index}"
    subnet_id                     = azurerm_subnet.myk8ssubnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = element(azurerm_public_ip.myk8spublicip.*.id, count.index)
  }
  tags = {
    environment = "K8s NIC"
  }
}

# Connect the security group to the network interfaces
resource "azurerm_network_interface_security_group_association" "myk8ssga" {
  count                     = var.n
  network_interface_id      = element(azurerm_network_interface.myk8snic.*.id, count.index)
  network_security_group_id = azurerm_network_security_group.myk8snsg.id
}

# Create virtual machines for Worker nodes
resource "azurerm_linux_virtual_machine" "myk8svm" {
  count                 = var.n
  name                  = "k8sVM${count.index}"
  location              = azurerm_resource_group.myk8sgroup.location
  resource_group_name   = azurerm_resource_group.myk8sgroup.name
  network_interface_ids = [element(azurerm_network_interface.myk8snic.*.id, count.index)]
  size                  = "Standard_DS1_v2"
  os_disk {
    name                 = "myOsDisk${count.index}"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }
  source_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "8.2"
    version   = "latest"
  }
  computer_name                   = "WorkerNode${count.index}"
  admin_username                  = "ansible"
  disable_password_authentication = true
  admin_ssh_key {
    username   = "ansible"
    public_key = tls_private_key.k8s_ssh.public_key_openssh
  }
  tags = {
    environment = "Worker_Node"
  }
}
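
The variable n has no default value, so Terraform will prompt for it at plan/apply time. To avoid the prompt, we can pass it with -var="n=2" or put it in a terraform.tfvars file in the working directory (the value 2 below is just an example):

# terraform.tfvars
n = 2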

Now I am going to create another Terraform file, ansible.tf, which will update the Ansible inventory with the IP addresses of the Kubernetes master node and worker nodes. I then use a local-exec provisioner to run the Ansible playbook, which uses the Ansible roles to configure these instances as the Kubernetes master node and worker nodes.

# Update Ansible inventory
resource "local_file" "ansible_host" {
  depends_on = [
    aws_instance.k8s
  ]
  content  = "[Master_Node]\n${aws_instance.k8s.public_ip}\n\n[Worker_Node]\n${join("\n", azurerm_linux_virtual_machine.myk8svm.*.public_ip_address)}"
  filename = "inventory"
}

# Run Ansible playbook
resource "null_resource" "null1" {
  depends_on = [
    local_file.ansible_host
  ]
  provisioner "local-exec" {
    command = "sleep 60"
  }
  provisioner "local-exec" {
    command = "ansible-playbook playbook.yml"
  }
}

# Print K8s Master and Worker node IP
output "Master_Node_IP" {
  value = aws_instance.k8s.public_ip
}
output "Worker_Node_IP" {
  value = join(", ", azurerm_linux_virtual_machine.myk8svm.*.public_ip_address)
}

The playbook.yml file contains the plays that run the Ansible roles which configure the instances on AWS and Azure as the Kubernetes master and worker nodes.

- name: Configure K8s Master Node
  hosts: Master_Node
  remote_user: ec2-user
  roles:
    - role: kubernetes_master

- name: Configure K8s Worker Node
  hosts: Worker_Node
  remote_user: ansible
  roles:
    - role: kubernetes_worker

I have already created Ansible roles to configure the Kubernetes master and worker nodes; you can download and use them from Ansible Galaxy. The links are below: 👇 👇

🔴 GitHub Links:
✔️ kubernetes_master: https://lnkd.in/eg-v_36
✔️ kubernetes_worker: https://lnkd.in/eqBVTbH
🔴 Ansible Galaxy Links:
✔️ kubernetes_master: https://lnkd.in/eBwQ928
✔️ kubernetes_worker: https://lnkd.in/eJJFAQn

📍Note:

The above Ansible roles were created for configuring a K8s cluster in a private network. Here we are configuring the K8s cluster across clouds, so we need to pass the public IP of the master node as the --control-plane-endpoint while initializing the Kubernetes master:

kubeadm init --control-plane-endpoint {{ groups['Master_Node'][0] }}:6443 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

In my case, all Terraform and Ansible files are in the same folder, which is the working directory. Terraform will also dynamically create the SSH key and the inventory file in that directory, and Ansible will use them to configure the Kubernetes cluster across AWS and Azure.
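
For reference, the generated inventory file will look something like this (the entries below are placeholders for the real public IPs):

[Master_Node]
<master-node-public-ip>

[Worker_Node]
<worker-node-0-public-ip>
<worker-node-1-public-ip>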

Execute the terraform code:

First, we have to initialize the working directory that contains the Terraform configuration files so that the Terraform plugins for the respective providers get installed.

terraform init

Now let’s run the terraform plan command to create an execution plan.

terraform plan

The output shows that 25 resources will be added.

Now we have to execute the actions proposed in the Terraform plan, and for this we use the terraform apply command.

terraform apply

Terraform will run the ansible-playbook command through the local-exec provisioner, and the playbook will run the roles to configure the K8s master node and worker nodes.

Once the terraform apply command completes, we can see the recap of the Ansible playbook, the total resources added by Terraform, and, in the outputs, the IP addresses of the K8s master node and worker nodes. The inventory file created by Terraform is also available for Ansible to use.

We can see all the resources added by Terraform, and we can also check whether our K8s cluster is up.

Infrastructure created by Terraform in AWS:

Infrastructure created by Terraform in Azure:

Now I will log in to the K8s master node instance running in AWS to check whether the Kubernetes cluster is up.
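
A minimal check looks like this, with the master node's public IP from the Terraform output substituted for the placeholder:

# SSH into the master node using the key generated by Terraform
ssh -i terraform_key.pem ec2-user@<Master_Node_IP>

# On the master node, list the nodes that have joined the cluster
kubectl get nodes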

Finally, the Kubernetes multi-node cluster is up, with the Kubernetes master node running on AWS and the Kubernetes worker nodes running on Azure. We can now deploy containerized applications that require rapid scaling.
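
As a quick smoke test, we could deploy and scale a sample application on the cluster (the image, deployment name, and replica count below are arbitrary):

# Create a deployment, scale it out, and check where the pods are scheduled
kubectl create deployment myweb --image=nginx
kubectl scale deployment myweb --replicas=4
kubectl get pods -o wide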

If we want to tear everything down, we can use the terraform destroy command to destroy the complete infrastructure created by Terraform.

terraform destroy --auto-approve

With just one command the complete infrastructure is destroyed, and we can launch it again with the terraform apply command.

You can refer to the complete code on my GitHub.

GitHub Link: https://github.com/adyraj/K8s-cluster-multi-cloud

Thank You for reading!! 😇😇


Aditya Raj

I'm a passionate learner diving into the concepts of computing 💻