Kubernetes Cluster on Multi-Cloud using Terraform and Ansible

Kubernetes

Kubernetes is an open-source container orchestration platform that automates various processes involved in deploying, managing, and scaling containerized applications.

Kubernetes Cluster

A Kubernetes cluster is a set of nodes that runs containerized applications. A cluster allows applications to run across multiple machines and environments.

Related article on configuring a Kubernetes cluster using an Ansible role: https://www.linkedin.com/pulse/configure-kubernetes-cluster-using-ansible-role-aditya-raj/

Terraform

Terraform is an open-source infrastructure-as-code tool created by HashiCorp. Terraform lets you define infrastructure in a simple, human-readable language called HCL (HashiCorp Configuration Language). Terraform reads the configuration files and creates an execution plan, which can then be applied to launch the infrastructure.
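
For a quick sense of what HCL looks like, here is a minimal, hypothetical configuration describing a single resource (the resource name and AMI ID are placeholders):

# A single EC2 instance, described declaratively in HCL
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
}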

Ansible

Ansible is an open-source IT configuration management, deployment, and orchestration tool. Ansible is simple to use yet powerful enough to automate complex multi-tier IT application environments. Ansible is agentless and requires no additional security infrastructure, so it is easy to deploy.
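
Because Ansible is agentless, it only needs SSH access to the managed hosts. For example, connectivity to every host in an inventory can be tested with a single ad-hoc command (the inventory file name here is illustrative):

ansible all -i inventory -m ping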

Prerequisites

Install Terraform:

Install yum-config-manager to manage your repositories.

$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
$ sudo yum -y install terraform
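
You can confirm the installation with:

$ terraform -version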

Install Ansible:

The Ansible controller node requires Python 2 (version 2.7) or Python 3 (version 3.5 or higher).

pip3 install ansible
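
Verify the installation:

ansible --version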

Authenticating to AWS

The AWS provider is used to interact with the many resources supported by AWS. The provider must be configured with proper credentials before it can be used. We can store our Access Key and Secret Key in the credentials file that lives at ~/.aws/credentials. A simple way to create this file is to install the AWS CLI and run the aws configure command.

aws configure --profile aditya
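
After the command above completes, ~/.aws/credentials contains a named profile similar to this (the keys shown are placeholders):

[aditya]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx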
# Configure the AWS Provider
provider "aws" {
  region  = "ap-south-1"
  profile = "aditya"
}
https://registry.terraform.io/providers/hashicorp/aws/latest/docs

Authenticating to Azure

The simplest way to authenticate Terraform to Azure is with the Azure CLI. First, log in using the az login command.

az login
az account list
az account set --subscription="SUBSCRIPTION_ID"
# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/azure_cli
# Create (and display) an SSH key
resource "tls_private_key" "k8s_ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Create local key
resource "local_file" "keyfile" {
  content         = tls_private_key.k8s_ssh.private_key_pem
  filename        = "terraform_key.pem"
  file_permission = "0400"
}

# Terraform AWS provider
provider "aws" {
  profile = "aditya"
  region  = "ap-south-1"
}

# Provides EC2 key pair
resource "aws_key_pair" "terraformkey" {
  key_name   = "terraform_key"
  public_key = tls_private_key.k8s_ssh.public_key_openssh
}

# Create VPC
resource "aws_vpc" "k8s_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = "K8S VPC"
  }
}

# Create Subnet
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.k8s_vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "ap-south-1a"
  tags = {
    Name = "Public Subnet"
  }
}

# Create Internet Gateway
resource "aws_internet_gateway" "k8s_gw" {
  vpc_id = aws_vpc.k8s_vpc.id
  tags = {
    Name = "K8S GW"
  }
}

# Create Routing table
resource "aws_route_table" "k8s_route" {
  vpc_id = aws_vpc.k8s_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.k8s_gw.id
  }

  tags = {
    Name = "K8S Route"
  }
}

# Associate Routing table
resource "aws_route_table_association" "k8s_asso" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.k8s_route.id
}
# Create security group (note: this demo rule allows all inbound traffic)
resource "aws_security_group" "allow_ssh_http" {
  name        = "Web_SG"
  description = "Allow SSH and HTTP inbound traffic"
  vpc_id      = aws_vpc.k8s_vpc.id
  ingress {
    description = "Allow All"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  tags = {
    Name = "K8S SG"
  }
}
# Launch EC2 instance for Master Node
resource "aws_instance" "k8s" {
  ami                         = "ami-010aff33ed5991201"
  instance_type               = "t2.micro"
  key_name                    = aws_key_pair.terraformkey.key_name
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.public_subnet.id
  vpc_security_group_ids      = [aws_security_group.allow_ssh_http.id]
  tags = {
    Name = "Master Node"
  }
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

variable "n" {
  type        = number
  description = "Number of worker nodes"
}

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myk8sgroup" {
  name     = "k8sResourceGroup"
  location = "Central India"
  tags = {
    environment = "K8s Resource Group"
  }
}
# Create virtual network
resource "azurerm_virtual_network" "myk8snetwork" {
  name                = "k8sVnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  tags = {
    environment = "K8s VN"
  }
}

# Create subnet
resource "azurerm_subnet" "myk8ssubnet" {
  name                 = "k8sSubnet"
  resource_group_name  = azurerm_resource_group.myk8sgroup.name
  virtual_network_name = azurerm_virtual_network.myk8snetwork.name
  address_prefixes     = ["10.0.1.0/24"]
}

# Create public IPs
resource "azurerm_public_ip" "myk8spublicip" {
  count               = var.n
  name                = "k8sPublicIP${count.index}"
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  allocation_method   = "Dynamic"
  tags = {
    environment = "K8s Public IP"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myk8snsg" {
  name                = "k8sNetworkSecurityGroup"
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
  tags = {
    environment = "K8s Security Group"
  }
}

# Create network interface
resource "azurerm_network_interface" "myk8snic" {
  count               = var.n
  name                = "k8sNIC${count.index}"
  location            = azurerm_resource_group.myk8sgroup.location
  resource_group_name = azurerm_resource_group.myk8sgroup.name
  ip_configuration {
    name                          = "myNicConfiguration${count.index}"
    subnet_id                     = azurerm_subnet.myk8ssubnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = element(azurerm_public_ip.myk8spublicip.*.id, count.index)
  }
  tags = {
    environment = "K8s NIC"
  }
}
# Connect the security group to the network interface
resource "azurerm_network_interface_security_group_association" "myk8ssga" {
  count                     = var.n # one association per NIC, not a hardcoded 2
  network_interface_id      = element(azurerm_network_interface.myk8snic.*.id, count.index)
  network_security_group_id = azurerm_network_security_group.myk8snsg.id
}
# Create virtual machine for Worker node
resource "azurerm_linux_virtual_machine" "myk8svm" {
  count                 = var.n
  name                  = "k8sVM${count.index}"
  location              = azurerm_resource_group.myk8sgroup.location
  resource_group_name   = azurerm_resource_group.myk8sgroup.name
  network_interface_ids = [element(azurerm_network_interface.myk8snic.*.id, count.index)]
  size                  = "Standard_DS1_v2"
  os_disk {
    name                 = "myOsDisk${count.index}"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }
  source_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "8.2"
    version   = "latest"
  }
  computer_name                   = "WorkerNode${count.index}"
  admin_username                  = "ansible"
  disable_password_authentication = true
  admin_ssh_key {
    username   = "ansible"
    public_key = tls_private_key.k8s_ssh.public_key_openssh
  }
  tags = {
    environment = "Worker_Node"
  }
}
# Update Ansible inventory (a single file listing the master and all workers)
resource "local_file" "ansible_host" {
  depends_on = [
    aws_instance.k8s
  ]
  content  = "[Master_Node]\n${aws_instance.k8s.public_ip}\n\n[Worker_Node]\n${join("\n", azurerm_linux_virtual_machine.myk8svm.*.public_ip_address)}"
  filename = "inventory"
}
# Run Ansible playbook
resource "null_resource" "null1" {
  depends_on = [
    local_file.ansible_host
  ]
  provisioner "local-exec" {
    command = "sleep 60"
  }
  provisioner "local-exec" {
    command = "ansible-playbook playbook.yml"
  }
}

# Print K8s Master and Worker node IP
output "Master_Node_IP" {
  value = aws_instance.k8s.public_ip
}
output "Worker_Node_IP" {
  value = join(", ", azurerm_linux_virtual_machine.myk8svm.*.public_ip_address)
}
- name: Configure K8s Master Node
  hosts: Master_Node
  remote_user: ec2-user
  roles:
    - role: kubernetes_master

- name: Configure K8s Worker Node
  hosts: Worker_Node
  remote_user: ansible
  roles:
    - role: kubernetes_worker
🔴 GitHub Links:
✔️ kubernetes_master: https://lnkd.in/eg-v_36
✔️ kubernetes_worker: https://lnkd.in/eqBVTbH

🔴 Ansible Galaxy Links:
✔️ kubernetes_master: https://lnkd.in/eBwQ928
✔️ kubernetes_worker: https://lnkd.in/eJJFAQn

📍Note:

The Ansible role above was originally created to configure a K8s cluster inside a private network. Here, however, we are configuring the cluster across multiple clouds, so we need to pass the public IP of the master node as the control-plane endpoint when initializing the Kubernetes master:

kubeadm init --control-plane-endpoint {{ groups['Master_Node'][0] }}:6443 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
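
Inside the kubernetes_master role, the corresponding task might look roughly like this sketch (not the role's exact code; the task name and structure are illustrative):

# Sketch: initialize the control plane using the master's public IP from the inventory
- name: Initialize Kubernetes master
  command: >
    kubeadm init
    --control-plane-endpoint {{ groups['Master_Node'][0] }}:6443
    --pod-network-cidr=10.244.0.0/16
    --ignore-preflight-errors=NumCPU
    --ignore-preflight-errors=Mem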

Execute the terraform code:

First, we have to initialize the working directory that contains the Terraform configuration files so that the provider plugins for the respective providers are downloaded and installed.

terraform init
terraform plan
terraform apply
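
Because the variable n (the worker node count) has no default value, terraform plan and terraform apply will prompt for it interactively; it can also be passed on the command line, for example:

terraform apply -var="n=2" --auto-approve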

Infrastructure created by Terraform in AWS:

Infrastructure created by Terraform in Azure:
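
Once the playbook finishes, the cluster can be checked from the master node (assuming the kubernetes_master role sets up kubectl and the kubeconfig there):

kubectl get nodes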

Finally, the entire multi-cloud infrastructure can be torn down with a single command:

terraform destroy --auto-approve

Thank You for reading!! 😇😇

I'm a passionate learner diving into the concepts of computing 💻