
K3S HA Kubernetes Cluster Installation on AlmaLinux

Today we are going to install a highly available K3S Kubernetes cluster on three AlmaLinux servers.

This article is the write-up for this video:
K3S HA Kubernetes Cluster Installation on AlmaLinux

What is K3S?

K3S is a lightweight Kubernetes distribution developed by the SUSE and Rancher team and geared towards edge, IoT and ARM deployments. That makes it perfect for smaller clusters on less powerful hardware.

We will use it to deploy a new cluster with an embedded etcd database for high availability. That way, if one of the servers fails, the cluster will keep running, since all the data in the etcd database is replicated between the nodes. Keep in mind that etcd needs a majority of nodes to reach quorum, so a three-node cluster tolerates the failure of exactly one node. All my nodes will work as control-plane and agent at the same time. That is fine for small clusters and development environments, but if you want to deploy bigger, production-grade clusters you should have dedicated control-plane nodes.

Preparation

I started by installing three AlmaLinux 9.2 servers without any special configuration. After that I installed the latest updates, installed basic tools like vim and gave my user Tux sudo privileges.
Finally, I opened the necessary ports in the firewall to enable communication between the nodes.

firewall-cmd --add-port=2379/tcp
firewall-cmd --add-port=2380/tcp
firewall-cmd --add-port=6443/tcp
firewall-cmd --add-port=10250/tcp
firewall-cmd --runtime-to-permanent
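
Ports 2379 and 2380 are used by etcd for client and peer traffic, 6443 is the Kubernetes API server and 10250 is the kubelet metrics port. Depending on your setup you may also need to open 8472/udp for flannel's VXLAN overlay, which K3S uses by default.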

Installation Process

K3s provides a handy bash script to streamline the installation process. You can fetch the script with curl and execute it with the necessary parameters. The script will then download the needed packages and configure the systemd service for you.
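
If you would rather not pipe a script from the internet straight into your shell, you can download it first and review it before executing (the filename k3s-install.sh is just an example):

curl -sfL https://get.k3s.io -o k3s-install.sh
less k3s-install.sh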

Initialize Cluster

Since we are starting from scratch, we need to initialize a new cluster. Therefore, I am connected to the first server and will pass the following parameters to the script.

The K3S_TOKEN is a shared secret and will be needed to join the other nodes to the cluster. You can set it to whatever string you like, but you should save it somewhere in case you want to add additional nodes later down the line.

The server flag tells the script that this node will be a control-plane. If you would like to add a worker instead, you can replace server with agent.

The --cluster-init flag will initialize a new cluster. This should only be done on the first node. If you pass it on the other nodes too, you will end up with three separate clusters, and that is not what we want.

curl -sfL https://get.k3s.io | K3S_TOKEN=ILoveLinux sh -s - server --cluster-init

Now you can see how the script downloads packages and sets up services. After a few seconds your cluster is up and running.
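
To verify, you can check the systemd service and query the node list with the kubectl binary bundled into K3S (run with sudo, since the kubeconfig is not readable by normal users yet):

sudo systemctl status k3s
sudo k3s kubectl get nodes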

Join the other Nodes

Now we can join the other nodes to the cluster. To do so, we connect to the second and third server and run the same script again, but this time with different parameters.

We have to set the K3S_TOKEN again to prove that we have permission to join these servers.

The server flag is set again, since the other two servers are control-planes as well.

The first difference is the --server flag. It replaces the --cluster-init flag and tells the script to join an existing cluster instead of creating a new one. Furthermore, we need to provide the URL of the first server. You can use DNS names if you want, but I will stick to the IP address to save the time needed for the DNS lookup and to make the cluster independent of a working DNS server. That is generally a good idea when working with clusters.

curl -sfL https://get.k3s.io | K3S_TOKEN=ILoveLinux sh -s - server --server https://192.168.124.102:6443

Configure Kubectl

Now our cluster is set up, but we cannot interact with it yet. The reason is that the configuration file for the cluster is owned by root, so our user Tux may not read it. We have to change the owner of the configuration file.

sudo chown tux:tux /etc/rancher/k3s/k3s.yaml
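
If you prefer to leave the original file owned by root, a common alternative is to copy it into your home directory and point kubectl at the copy (paths and user name here are just my setup):

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown tux:tux ~/.kube/config
export KUBECONFIG=~/.kube/config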

Now we can interact with our cluster and check if all three servers are listed as nodes of the cluster.

kubectl get nodes

Basic NGINX Deployment

Let’s create a simple nginx deployment to give our cluster some work.

First we define a namespace for the deployment. It is recommended to create a separate namespace for every application in the cluster.
To define the workload, create a YAML file. I will call mine nginx-test.yml.

---

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-test

To create the namespace we need to apply this configuration to our cluster:

kubectl apply -f nginx-test.yml

You can see the new namespace in our cluster.

kubectl get namespaces

Now open the YAML file again. We will add the deployment next. A Deployment describes which containers will be created.

---

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-test

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
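          # CPU request so the HorizontalPodAutoscaler we add later can
          # calculate average utilization; 100m is an example value
          resources:
            requests:
              cpu: 100m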

Again apply the configuration.

kubectl apply -f nginx-test.yml

Now we can see an nginx container being created.

kubectl get all -n nginx-test

That is all good, but our webserver lacks scalability. To achieve this we need a HorizontalPodAutoscaler. The autoscaler monitors the CPU usage of the containers and creates more of them if the average utilization passes our threshold. We can also specify a minimum and maximum number of containers.
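
A minimal autoscaler definition for our deployment could look like the following sketch. The replica limits and the 50% CPU target are example values. Note that CPU-based scaling only works if the containers define a CPU request (see the resources block added to the deployment above); the required metrics-server ships with K3S by default.

---

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-test
  namespace: nginx-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-test
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

Apply the file again with kubectl apply -f nginx-test.yml and check the autoscaler with kubectl get hpa -n nginx-test.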

If we increase the minimum to something big like 30, we can see how our cluster creates a lot of new containers.
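
One way to do this is to patch the autoscaler in place (assuming it is named nginx-test as in the sketch above; the maximum has to be raised as well, since it may not be lower than the minimum):

kubectl patch hpa nginx-test -n nginx-test -p '{"spec":{"minReplicas":30,"maxReplicas":40}}'

You can watch the new pods appear with kubectl get pods -n nginx-test --watch.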

We can see that all the containers are distributed over the three nodes of our cluster.

kubectl get pods -o wide

With top we can monitor the CPU and memory usage of all our nodes.

kubectl top nodes
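
The same works on the pod level:

kubectl top pods -n nginx-test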

Now if we lower our maximum number of containers, the autoscaler will delete the excess containers immediately.
If the number of containers is between the minimum and the maximum, the autoscaler will wait and only remove containers that have not been needed for about five minutes.

If we want to reboot one of the nodes, we have to cordon and drain it first.
Cordon means that no new containers will be scheduled on this node. Drain means the existing containers get evicted and rebuilt on other nodes.

kubectl cordon k3s-01.home
kubectl drain k3s-01.home --ignore-daemonsets
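
The --ignore-daemonsets flag is usually needed because pods managed by a DaemonSet (K3S deploys some by default) cannot be evicted and would otherwise abort the drain.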

To re-enable the node you need to uncordon it.

kubectl uncordon k3s-01.home

Now new containers may be scheduled on this node again, but existing containers will not be moved back immediately.

Finally, we can watch the autoscaler destroy the additional nginx containers.


Jannik Rehkemper

I'm a professional Linux administrator and hobby programmer. My training as an IT professional started in 2019 and ended in 2022. Since 2023 I have been working as a Linux administrator.