Introduction
This page documents the philosophy and usage of config-tool, which is used to configure the Kube360 data and control plane.
Philosophy Behind the Tool
The tool originated as a way to configure the software and systemd services running on the cluster. Note that our overall goal is to build a Kubernetes cluster on bare-metal servers (in our case, on OVHCloud).
Provisioning nodes using the OVHCloud Terraform provider is not always straightforward:
- There are currently bugs with the Terraform provider. See this thread for more details.
- An order takes at least one to two days from placement to delivery.
Once you have the nodes, the following tasks still need to be done:
- Installing the base operating system
- Configuring appropriate disk partitions
Given these challenges, we are limiting the scope of config-tool specifically to the post-installation phase.
Out of Scope:
- Node provisioning
- Disk partitioning
In Scope (What config-tool does):
- Ensures the appropriate services (e.g. haproxy, keepalived) are running with the correct configuration.
- Ensures the correct Kubernetes manifests are applied.
Usage
There are two primary ways to run this tool:
- Run locally from your machine: Ideal for checking drift and inspecting configurations.
- Run from within a Docker container: This is required when certain manifests (like the k3s installation file) are expected to be at a specific path within the filesystem. In this workflow, the tool copies the executable to the remote machine via rsync and then executes a subcommand of that executable directly on the remote machine.
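The copy-then-execute workflow can be sketched as follows. This is a minimal illustration only: the binary path, node name, and `apply-local` subcommand are assumptions, not the tool's real interface.

```rust
use std::process::Command;

/// Build the rsync invocation that copies the locally built binary to a node.
/// (Illustrative: the remote path is an assumption.)
fn rsync_command(local_binary: &str, node: &str, remote_path: &str) -> Command {
    let mut cmd = Command::new("rsync");
    cmd.args(["-az", local_binary, &format!("{node}:{remote_path}")]);
    cmd
}

/// Build the ssh invocation that runs a subcommand of the copied binary
/// directly on the remote machine.
fn remote_exec_command(node: &str, remote_path: &str, subcommand: &str) -> Command {
    let mut cmd = Command::new("ssh");
    cmd.args([node, remote_path, subcommand]);
    cmd
}

fn main() {
    // In practice the tool would spawn these and check exit statuses;
    // here we only print the commands that would run.
    let copy = rsync_command("target/release/config-tool", "postgres-one", "/usr/local/bin/config-tool");
    let run = remote_exec_command("postgres-one", "/usr/local/bin/config-tool", "apply-local");
    println!("{copy:?}");
    println!("{run:?}");
}
```

Running the subcommand from a binary already on the node avoids depending on a toolchain being installed remotely.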
Usage Examples
Below are a few examples of how to use config-tool. More specific usage is documented in the relevant sections.
Finding Node Drift Across the Cluster
You can check if any nodes have configuration that has drifted from the expected state:
❯ cargo run -- check-node-drift
INFO Checking node drift for control-plane-replica-two
INFO Checking node drift for control-plane-replica-one
INFO Checking node drift for postgres-one
INFO Checking node drift for control-plane-initial
INFO Checking node drift for postgres-two
┌───────────────────────────┬──────────────────────────┐
│ Node ┆ Out of Sync Component │
╞═══════════════════════════╪══════════════════════════╡
│ postgres-one ┆ k3s │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ control-plane-replica-one ┆ k3s, haproxy, keepalived │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ control-plane-initial ┆ k3s, haproxy, keepalived │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ control-plane-replica-two ┆ k3s, haproxy, keepalived │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ postgres-two ┆ k3s │
└───────────────────────────┴──────────────────────────┘
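Conceptually, the drift check compares the expected (generated) configuration for each component against what is actually on the node. A simplified sketch of that comparison, with hypothetical component names and in-memory strings standing in for the real files:

```rust
use std::collections::BTreeMap;

/// Return the names of components whose live content differs from the
/// expected content. (Sketch only: the real tool reads files on the node.)
fn out_of_sync(
    expected: &BTreeMap<&str, &str>, // component -> expected file content
    live: &BTreeMap<&str, &str>,     // component -> content found on the node
) -> Vec<String> {
    let mut drifted = Vec::new();
    for (name, want) in expected {
        // Missing or different content both count as drift.
        if live.get(name) != Some(want) {
            drifted.push(name.to_string());
        }
    }
    drifted
}

fn main() {
    let expected = BTreeMap::from([("k3s", "v1.30"), ("haproxy", "cfg-a")]);
    let live = BTreeMap::from([("k3s", "v1.29"), ("haproxy", "cfg-a")]);
    // Only k3s differs, so only k3s is reported as out of sync.
    println!("{:?}", out_of_sync(&expected, &live)); // ["k3s"]
}
```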
Finding Drift on a Specific File
To see a diff of a specific configuration file on a particular node:
❯ cargo run -- inspect control-plane-replica-two --file=keepalived --live
--- keepalived config: /etc/keepalived/keepalived.conf ---
--- remote
+++ local
@@ -1,15 +1,15 @@
-# /etc/keepalived/keepalived.conf (On BACKUP nodes)
+# /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state BACKUP
interface enp3s0f1np1 # <-- Your vRack network interface
virtual_router_id 51
- priority 99 # <-- Lower priority (use 99 on the next node)
+ priority 99
advert_int 1
authentication {
auth_type PASS
- auth_pass "REDACTED"
+ auth_pass "REDACTED" # <-- Use the same password on all nodes
}
virtual_ipaddress {
Inspecting Expected Configuration
If you just want to see what the generated configuration file will look like for a specific node:
❯ cargo run -- inspect control-plane-replica-two --file=keepalived
--- keepalived config: /etc/keepalived/keepalived.conf ---
# /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state BACKUP
interface enp3s0f1np1 # <-- Your vRack network interface
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass "REDACTED" # <-- Use the same password on all nodes
}
virtual_ipaddress {
192.168.1.200/24 # <-- Your VIP
}
}
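A file like the one above is generated per node from a template, with node-specific fields (interface, priority, VIP) filled in. A rough sketch of such templating; the function signature and the template text are illustrative, not the tool's actual implementation:

```rust
/// Render a keepalived config for one node. Illustrative only: the real
/// template lives in config-tool and may include more fields.
fn render_keepalived(interface: &str, priority: u8, auth_pass: &str, vip: &str) -> String {
    format!(
        "vrrp_instance VI_1 {{
    state BACKUP
    interface {interface}
    virtual_router_id 51
    priority {priority}
    advert_int 1
    authentication {{
        auth_type PASS
        auth_pass \"{auth_pass}\"
    }}
    virtual_ipaddress {{
        {vip}
    }}
}}
"
    )
}

fn main() {
    // Per-node values; only interface and priority would typically vary.
    let cfg = render_keepalived("enp3s0f1np1", 99, "REDACTED", "192.168.1.200/24");
    print!("{cfg}");
}
```

Because the expected file is deterministic given the node's inputs, the same rendering path can serve both `inspect` (print it) and the drift check (diff it against the node).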