Kubernetes runs in production at countless companies. But what if you have no experience with it? K3s by Rancher is a great way to get started.

K3s is a certified Kubernetes distribution built for IoT and edge computing, but it works equally well for home labs and small infrastructure. Everything is packaged in a single binary of less than 40 MB that runs on most Linux distributions.

Requirements

To build a high-availability K3s cluster, you need three components:

  • An external database (we use PostgreSQL)
  • An external load balancer (we use NGINX)
  • At least 3 servers with static IP addresses (we use virtualized Ubuntu servers)

Our setup uses two master nodes and at least one agent node.

Database configuration

On the PostgreSQL server, create a database named k3s and a dedicated user (k3s_user in the examples below) with full access rights to that database.
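A minimal sketch of that setup using psql (the user name and password here simply mirror the connection-string examples later in this article; pick your own):

```shell
# Run on the PostgreSQL host as the postgres superuser.
sudo -u postgres psql <<'SQL'
CREATE DATABASE k3s;
CREATE USER k3s_user WITH ENCRYPTED PASSWORD 'SuperSecret';
GRANT ALL PRIVILEGES ON DATABASE k3s TO k3s_user;
SQL
```

Also make sure PostgreSQL accepts remote connections from the master nodes (listen_addresses and pg_hba.conf).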

Load balancer setup

Configure NGINX to route traffic across both master nodes using a stream block (this requires NGINX's stream module, which most distribution packages include):

events {}

stream {
    upstream k3s_servers {
        server 192.168.0.10:6443;
        server 192.168.0.11:6443;
    }

    server {
        listen 6443;
        proxy_pass k3s_servers;
    }
}

This routes all incoming traffic on port 6443 to the master nodes in round-robin fashion.
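Before moving on, it is worth sanity-checking the load balancer. A short sketch using standard NGINX and iproute2 commands, run on the load balancer host:

```shell
# Validate the configuration, then apply it.
sudo nginx -t
sudo systemctl reload nginx

# NGINX should now be listening on port 6443.
sudo ss -tlnp | grep 6443
```

Until at least one master is installed, connections will reach NGINX but fail upstream, which is expected at this stage.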

First master node installation

You need a datasource endpoint URL. Examples:

  • PostgreSQL: postgres://k3s_user:SuperSecret@192.168.0.5:5432/k3s
  • MariaDB/MySQL: mysql://k3s_user:SuperSecret@tcp(192.168.0.5:3306)/k3s

Install the first master node:

curl -sfL https://get.k3s.io | sh -s - server \
  --node-taint CriticalAddonsOnly=true:NoExecute \
  --tls-san 192.168.0.6 \
  --tls-san k3s.home \
  --datastore-endpoint 'postgres://k3s_user:SuperSecret@192.168.0.5:5432/k3s'

Parameters explained:

  • server: designates this node as a master (server) node
  • --node-taint: taints the master so regular workloads are not scheduled on it; only pods tolerating CriticalAddonsOnly will run there
  • --tls-san: adds extra Subject Alternative Names to the TLS certificate, so the API server can also be reached via the load balancer's address and hostname
  • --datastore-endpoint: the connection string for the external database

Verify the node is up:

sudo k3s kubectl get nodes

The output should show the node as Ready with the control-plane,master role.
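If the node does not appear or stays NotReady, the K3s service logs are the first place to look (the install script registers a systemd service named k3s):

```shell
# Check service health and follow the logs.
sudo systemctl status k3s
sudo journalctl -u k3s -f
```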

Second master node

Use the same installation command on the second server. It will automatically register with the same database and become part of the cluster.
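Afterwards, listing the nodes on either master should show both as Ready. To display only the masters, you can filter on the role label K3s assigns (a sketch, assuming default labels):

```shell
# Both masters should appear as Ready with the control-plane,master role.
sudo k3s kubectl get nodes -l node-role.kubernetes.io/master=true
```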

Agent node installation

First, retrieve the cluster token from any master node:

sudo cat /var/lib/rancher/k3s/server/node-token

Then on each agent node:

curl -sfL https://get.k3s.io | \
  K3S_URL=https://k3s.home:6443 \
  K3S_TOKEN=<TokenFromPreviousOutput> \
  sh -

Repeat this for every agent node you want to add. In our setup we ended up with 4 agent nodes.
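With several agents, the installation can be scripted. A hedged sketch, assuming passwordless SSH access and hypothetical hostnames agent1 through agent4:

```shell
# TOKEN holds the value read from /var/lib/rancher/k3s/server/node-token.
TOKEN='<TokenFromPreviousOutput>'
for host in agent1 agent2 agent3 agent4; do
  ssh "$host" "curl -sfL https://get.k3s.io | \
    K3S_URL=https://k3s.home:6443 K3S_TOKEN=$TOKEN sh -"
done
```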

Retrieving the kubeconfig

Get the kubeconfig file from a master node:

sudo cat /etc/rancher/k3s/k3s.yaml

The file looks like this (values truncated for readability):

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <Base64EncodedValue>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: <Base64EncodedValue>
    client-key-data: <Base64EncodedValue>

Important: replace the server value (127.0.0.1) with the address of your load balancer (e.g. https://k3s.home:6443) before using this file locally.
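Putting that together, a sketch of fetching the file and rewriting the server address (the user and host are hypothetical; note that k3s.yaml is owned by root by default, so you may need to relax permissions or copy it on the master first):

```shell
# Copy the kubeconfig from the first master to the local machine.
scp ubuntu@192.168.0.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Point the client at the load balancer instead of the node-local address.
sed -i 's#https://127.0.0.1:6443#https://k3s.home:6443#' ~/.kube/config

# Verify access through the load balancer.
kubectl get nodes
```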

Conclusion

There you go — a fully running, highly available Kubernetes cluster using K3s. The total setup time is under an hour and the resource footprint is minimal compared to a vanilla Kubernetes installation.