
## 📝 Author

- Author: Birat Aryal (birataryal.github.io)
- Created: 2026-02-16
- Updated: Monday, 16 February 2026, 22:14:28
- Website: birataryl.com.np
- Repository: Birat Aryal
- LinkedIn: Birat Aryal
- DevSecOps Engineer | System Engineer | Cyber Security Analyst | Network Engineer

# CAPI + vSphere UAT Cluster

This documentation covers a complete, field-tested setup of a Kubernetes workload cluster on vSphere using Cluster API (CAPI) and its vSphere infrastructure provider (CAPV).

## High-level outcome

- One control-plane node and N worker nodes created via CAPI MachineDeployments.
- API server VIP provided by kube-vip (192.168.35.100), plus a DNS record kube-api-server.uat.local.
- Node IPs allocated from an InClusterIPPool (tmsnw-pool) via addressesFromPools.
- Guest networking stabilized by defining the default route (and metric) in the CAPV network device spec (see the sketch after this list).
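
The last two outcomes come from the IPAM pool and the per-device network settings on the CAPV machine template. The sketch below shows the general shape, assuming the in-cluster IPAM provider is installed alongside CAPV; the address range, gateway, nameserver, VM template, and vSphere paths are placeholders rather than values from this environment, and field names should be verified against your CAPV and IPAM provider versions.

```yaml
# InClusterIPPool that backs addressesFromPools (in-cluster IPAM provider).
# Address range, prefix, and gateway are assumptions, not values from this doc.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: tmsnw-pool
spec:
  addresses:
    - 192.168.35.110-192.168.35.130   # placeholder range on the UAT subnet
  prefix: 24
  gateway: 192.168.35.1               # placeholder default gateway
---
# VSphereMachineTemplate fragment: static addressing from the pool plus an
# explicit default route with a metric, so the guest ends up with exactly one
# predictable default route after cloud-init runs.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: uat-md-0                       # placeholder name
spec:
  template:
    spec:
      template: ubuntu-2204-kube       # placeholder VM template
      datacenter: DC1                  # placeholder
      datastore: datastore1            # placeholder
      folder: /DC1/vm/uat              # placeholder
      resourcePool: /DC1/host/Cluster/Resources  # placeholder
      numCPUs: 4
      memoryMiB: 8192
      diskGiB: 60
      network:
        devices:
          - networkName: UAT-VLAN35    # placeholder port group
            dhcp4: false
            addressesFromPools:
              - apiGroup: ipam.cluster.x-k8s.io
                kind: InClusterIPPool
                name: tmsnw-pool
            nameservers:
              - 192.168.35.2           # placeholder DNS server
            routes:
              - to: 0.0.0.0/0
                via: 192.168.35.1      # placeholder gateway
                metric: 100
```

Worker MachineDeployments then reference this template through their infrastructureRef, so every cloned node picks up the same addressing and routing behaviour.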

## Why this matters

Cluster API behaves very predictably when three pillars are correct:

  1. vSphere primitives are accurate (template, folder, resource pool, datastore, network).
  2. Guest bootstrap is healthy (cloud-init runs, routes exist, kubeadm succeeds).
  3. Reachability exists between (see the endpoint sketch after this list):
       - management cluster -> workload VIP:6443
       - workload nodes -> VIP:6443 (for kubeadm join)
       - workload nodes -> DNS/NTP/package repos (if needed)
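
The first two reachability targets are the same endpoint: the cluster's control-plane VIP and its DNS name. Below is a minimal sketch of where that endpoint is declared, assuming the usual CAPV + kube-vip setup; the object name and vCenter address are placeholders.

```yaml
# The endpoint both the management cluster's controllers (health-checking the
# workload API) and the workload nodes (kubeadm join) must reach. kube-vip
# advertises 192.168.35.100 on the control-plane node(s); the DNS record
# kube-api-server.uat.local points at that VIP.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
  name: uat                            # placeholder cluster name
spec:
  server: vcenter.uat.local            # placeholder vCenter endpoint
  controlPlaneEndpoint:
    host: kube-api-server.uat.local    # resolves to the kube-vip VIP 192.168.35.100
    port: 6443
```

If this host:port is unreachable from the management cluster, CAPI cannot reconcile the workload cluster; if it is unreachable from new nodes, kubeadm join times out.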

If any of those are missing, you’ll see the classic pattern:

- The control plane VM exists and is ready in CAPV.
- Machines/VSphereMachines for the workers exist but remain Pending.
- Controllers log `cluster is not reachable: ... connect: no route to host`.

Continue to Architecture to understand the data plane + control plane flows.