Kube360 Nodes
Introduction
This document provides an overview of the current node architecture and configuration within the Kube360 cluster. It details the roles of different nodes, including dedicated workloads, and explains the high-availability (HA) components used for the control plane.
Cluster Nodes
The Kube360 cluster currently comprises five nodes, split between highly available control-plane masters and dedicated worker nodes.
❯ kubectl get nodes
NAME           STATUS   ROLES                       AGE    VERSION
ns5019222      Ready    control-plane,etcd,master   29d    v1.33.6+k3s1
ns577110       Ready    control-plane,etcd,master   29d    v1.33.6+k3s1
postgres-one   Ready    <none>                      3d     v1.33.6+k3s1
postgres-two   Ready    <none>                      2d5h   v1.33.6+k3s1
replica-1      Ready    control-plane,etcd,master   29d    v1.33.6+k3s1
Dedicated Workload Nodes
The two nodes prefixed with postgres- (postgres-one and postgres-two) are worker nodes dedicated exclusively to PostgreSQL workloads. The nodes are tainted so that general-purpose pods are not scheduled on them, and the PostgreSQL pods carry the matching tolerations and node-affinity rules that allow them to land there. This guarantees resource isolation and predictable performance for the database tier.
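The pattern above can be sketched roughly as follows. Note that the taint key `dedicated=postgres` and the pod spec are hypothetical illustrations; the actual taint keys and manifests used on the cluster are not shown in this document.

```yaml
# Hypothetical sketch. The node would first be tainted, e.g.:
#   kubectl taint nodes postgres-one dedicated=postgres:NoSchedule
# A PostgreSQL pod then needs the matching toleration plus affinity:
apiVersion: v1
kind: Pod
metadata:
  name: postgres-example
spec:
  tolerations:
  - key: "dedicated"            # assumed taint key
    operator: "Equal"
    value: "postgres"
    effect: "NoSchedule"        # permits scheduling onto the tainted node
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: ["postgres-one", "postgres-two"]
  containers:
  - name: postgres
    image: postgres:16          # illustrative image tag
```

The taint keeps everyone else off the nodes; the toleration and affinity together pin the database pods onto them.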
High Availability: Keepalived and HAProxy
To ensure resilience and uptime, HAProxy and Keepalived together provide the high availability (HA) of the K3s control plane.
- Keepalived: Manages a single, "floating" virtual IP address (VIP) that can seamlessly move between control plane servers. It ensures that the VIP is always assigned to a healthy node, providing a stable, highly available endpoint for the Kubernetes API.
- HAProxy: While Keepalived ensures a master node is reachable at the VIP, HAProxy load-balances the traffic arriving at the VIP across the actual K3s API server processes.
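A Keepalived setup for this typically looks like the fragment below. This is an illustrative sketch only: the interface name, priority, and virtual_router_id are assumptions, not values taken from the cluster; only the VIP 192.168.1.200 appears in this document.

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance K3S_API {
    state MASTER            # BACKUP on the other masters
    interface eth0          # assumed NIC name
    virtual_router_id 51    # assumed; must match across all masters
    priority 150            # highest-priority healthy node holds the VIP
    advert_int 1
    virtual_ipaddress {
        192.168.1.200/24    # the floating VIP from this document
    }
}
```

Each master runs the same instance with a different priority; when the holder of the VIP fails, the next-highest-priority node takes it over via VRRP.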
How They Work Together:
- A worker node needs to talk to the Kubernetes API, so it sends its request to the VIP: 192.168.1.200:6443.
- Keepalived ensures that exactly one of your master nodes currently "owns" the 192.168.1.200 IP address.
- The HAProxy service running on that active master receives the request.
- HAProxy looks at its list of healthy backend servers and forwards the request to one of them (e.g., 192.168.1.11:6444).
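The HAProxy side of this flow might look like the excerpt below. This is a sketch, not the cluster's actual configuration: only 192.168.1.200 (the VIP) and 192.168.1.11:6444 appear in this document, and the other backend addresses are assumptions.

```
# /etc/haproxy/haproxy.cfg (excerpt, sketch)
frontend k3s-api
    bind 192.168.1.200:6443          # requires ip_nonlocal_bind (see below)
    mode tcp
    default_backend k3s-masters

backend k3s-masters
    mode tcp
    option tcp-check                  # health-check each API server
    server ns5019222 192.168.1.11:6444 check
    server ns577110  192.168.1.12:6444 check   # assumed address
    server replica-1 192.168.1.13:6444 check   # assumed address
```

Unhealthy backends are dropped from rotation automatically, so a request to the VIP always lands on a live API server.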
Keepalived Configuration on Nodes
To allow Keepalived to bind to the virtual IP address even if it is not currently assigned to the local network interface, the kernel must be configured to permit non-local binding.
Apply the following configuration to each node:
echo "net.ipv4.ip_nonlocal_bind = 1" | sudo tee -a /etc/sysctl.conf
Reload the system variables to apply the changes:
sudo sysctl -p
Verify that the setting has been successfully applied:
sysctl net.ipv4.ip_nonlocal_bind
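The setting can also be applied at runtime without waiting for a reboot, and scripts can read the current value directly from /proc (standard Linux paths assumed):

```shell
# Apply immediately at runtime (equivalent to `sudo sysctl -p` after the file edit):
#   sudo sysctl -w net.ipv4.ip_nonlocal_bind=1
# Read the current value without root by going through /proc:
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```

A value of 1 means non-local binding is enabled; 0 means HAProxy will fail to bind the VIP on nodes that do not currently hold it.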
Testing HAProxy and Keepalived
To verify that the networking components are working as expected, SSH into the node and view the active listening ports:
ubuntu@ns5019222:~$ sudo netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.1.11:2381 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 192.168.1.11:2380 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 192.168.1.11:2379 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.54:53 0.0.0.0:* LISTEN 3720642/systemd-res
tcp 0 0 127.0.0.1:6445 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:6444 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 192.168.1.11:6444 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:2382 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 192.168.1.11:10259 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 192.168.1.11:10257 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 192.168.1.11:10250 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 192.168.1.200:6443 0.0.0.0:* LISTEN 2023740/haproxy
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 127.0.0.1:10010 0.0.0.0:* LISTEN 3294580/containerd
tcp 0 0 127.0.0.1:10256 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:10258 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 3294551/k3s server
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 3720642/systemd-res
tcp6 0 0 :::22 :::* LISTEN 1/systemd
Key things to verify from the output:
- HAProxy should be listening on the VIP (192.168.1.200:6443 in this example).
- You won't see Keepalived in this port list: it manages the virtual IP via VRRP at the network layer, not through a standard TCP listening port.
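The manual check above is easy to script. The sketch below filters sample lines captured from the netstat output for the VIP port; on a live node you would pipe `sudo netstat -lntp` (or `ss -lntp`) into the same awk filter instead of the printf:

```shell
# Sample lines taken from the output above; on a node, replace the printf
# with: sudo netstat -lntp
printf '%s\n' \
  'tcp 0 0 192.168.1.11:6444 0.0.0.0:* LISTEN 3294551/k3s' \
  'tcp 0 0 192.168.1.200:6443 0.0.0.0:* LISTEN 2023740/haproxy' |
awk '$4 ~ /:6443$/ {print $4}'
# → 192.168.1.200:6443
```

An empty result from the real netstat output would indicate that HAProxy is not bound to the VIP on this node.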
You can further test that the virtual IP is correctly routing to the Kubernetes API server from the node via curl.
Query the Kubernetes readiness endpoint using the stable VIP:
$ curl --insecure https://192.168.1.200:6443/readyz
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
Receiving a JSON "Unauthorized" response (code 401) is a good sign: it confirms that your request successfully reached the Kubernetes API server running on the active node, and that the API server correctly rejected the unauthenticated request.
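This distinction is straightforward to automate: any HTTP response from the VIP (even a 401) proves the Keepalived/HAProxy path works, while an empty response means the VIP path is broken. A minimal sketch, using a hard-coded copy of the JSON body above in place of the live curl call:

```shell
# On a node this would be: resp=$(curl -sk https://192.168.1.200:6443/readyz)
# Hard-coded here from the output above for illustration.
resp='{"kind":"Status","apiVersion":"v1","status":"Failure","reason":"Unauthorized","code":401}'
case "$resp" in
  *'"code":401'*|*'"code": 401'*) echo "VIP routes to the API server" ;;
  '')                             echo "no response: VIP path is broken" ;;
  *)                              echo "unexpected response: $resp" ;;
esac
# → VIP routes to the API server
```

A production health check would typically rely on curl's exit code rather than pattern-matching the body, but the principle is the same.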